Sep 13 00:52:05.010699 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:52:05.010721 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:52:05.010729 kernel: BIOS-provided physical RAM map:
Sep 13 00:52:05.010735 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:52:05.010740 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 13 00:52:05.010748 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 13 00:52:05.010762 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
Sep 13 00:52:05.010771 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd1fff] usable
Sep 13 00:52:05.010781 kernel: BIOS-e820: [mem 0x000000003ffd2000-0x000000003fffafff] ACPI data
Sep 13 00:52:05.010789 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 13 00:52:05.010796 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 13 00:52:05.010801 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 13 00:52:05.010806 kernel: printk: bootconsole [earlyser0] enabled
Sep 13 00:52:05.010811 kernel: NX (Execute Disable) protection: active
Sep 13 00:52:05.010825 kernel: efi: EFI v2.70 by Microsoft
Sep 13 00:52:05.010836 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f340a98 RNG=0x3ffd2018
Sep 13 00:52:05.010846 kernel: random: crng init done
Sep 13 00:52:05.010856 kernel: SMBIOS 3.1.0 present.
Sep 13 00:52:05.010867 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 13 00:52:05.010876 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 13 00:52:05.010882 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 13 00:52:05.010887 kernel: Hyper-V Host Build:26100-10.0-1-0.1293
Sep 13 00:52:05.010894 kernel: Hyper-V: Nested features: 0x1e0101
Sep 13 00:52:05.010900 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 13 00:52:05.010910 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 13 00:52:05.010920 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 13 00:52:05.010931 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 13 00:52:05.010942 kernel: tsc: Detected 2793.437 MHz processor
Sep 13 00:52:05.010953 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:52:05.010959 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:52:05.010965 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 13 00:52:05.010970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:52:05.010978 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 13 00:52:05.010983 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 13 00:52:05.010989 kernel: Using GB pages for direct mapping
Sep 13 00:52:05.010999 kernel: Secure boot disabled
Sep 13 00:52:05.011010 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:52:05.011021 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 13 00:52:05.011032 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:52:05.011042 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:52:05.011060 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 13 00:52:05.011066 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 13 00:52:05.011072 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:52:05.011083 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:52:05.011094 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:52:05.011106 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:52:05.011119 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:52:05.011131 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:52:05.011145 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 13 00:52:05.011153 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
Sep 13 00:52:05.011158 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 13 00:52:05.011165 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 13 00:52:05.011176 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 13 00:52:05.011187 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 13 00:52:05.011198 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 13 00:52:05.011211 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 13 00:52:05.011222 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 13 00:52:05.011228 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:52:05.011234 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:52:05.011241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 13 00:52:05.011253 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 13 00:52:05.011264 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 13 00:52:05.011275 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 13 00:52:05.011287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 13 00:52:05.011300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 13 00:52:05.011312 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 13 00:52:05.011323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 13 00:52:05.011334 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 13 00:52:05.011346 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 13 00:52:05.011357 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 13 00:52:05.011367 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 13 00:52:05.011374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 13 00:52:05.011382 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 13 00:52:05.011387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 13 00:52:05.011394 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 13 00:52:05.011406 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 13 00:52:05.011418 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 13 00:52:05.011438 kernel: Zone ranges:
Sep 13 00:52:05.011449 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:52:05.011460 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:52:05.011472 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:52:05.011485 kernel: Movable zone start for each node
Sep 13 00:52:05.011496 kernel: Early memory node ranges
Sep 13 00:52:05.011508 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:52:05.011522 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 13 00:52:05.011530 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd1fff]
Sep 13 00:52:05.011535 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 13 00:52:05.011541 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:52:05.011546 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 13 00:52:05.011554 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:52:05.011568 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:52:05.011579 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
Sep 13 00:52:05.011590 kernel: On node 0, zone DMA32: 45 pages in unavailable ranges
Sep 13 00:52:05.011604 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 13 00:52:05.011610 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 13 00:52:05.011616 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:52:05.011621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:52:05.011632 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:52:05.011643 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 13 00:52:05.011656 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:52:05.011668 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 13 00:52:05.011679 kernel: Booting paravirtualized kernel on Hyper-V
Sep 13 00:52:05.011688 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:52:05.011694 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:52:05.011700 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 13 00:52:05.011707 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 13 00:52:05.011718 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:52:05.011729 kernel: Hyper-V: PV spinlocks enabled
Sep 13 00:52:05.011743 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:52:05.011754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062375
Sep 13 00:52:05.011772 kernel: Policy zone: Normal
Sep 13 00:52:05.011780 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:52:05.011787 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:52:05.011794 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 13 00:52:05.011805 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:52:05.011817 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:52:05.011830 kernel: Memory: 8076668K/8387512K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 310584K reserved, 0K cma-reserved)
Sep 13 00:52:05.011842 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:52:05.011860 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:52:05.011868 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:52:05.011874 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:52:05.011881 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:52:05.011887 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:52:05.011897 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:52:05.011909 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:52:05.011920 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:52:05.011927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:52:05.011934 kernel: Using NULL legacy PIC
Sep 13 00:52:05.011942 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 13 00:52:05.011953 kernel: Console: colour dummy device 80x25
Sep 13 00:52:05.011965 kernel: printk: console [tty1] enabled
Sep 13 00:52:05.011976 kernel: printk: console [ttyS0] enabled
Sep 13 00:52:05.011982 kernel: printk: bootconsole [earlyser0] disabled
Sep 13 00:52:05.011990 kernel: ACPI: Core revision 20210730
Sep 13 00:52:05.011996 kernel: Failed to register legacy timer interrupt
Sep 13 00:52:05.012003 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:52:05.012016 kernel: Hyper-V: Using IPI hypercalls
Sep 13 00:52:05.012028 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5586.87 BogoMIPS (lpj=2793437)
Sep 13 00:52:05.012046 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:52:05.012054 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 00:52:05.012060 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 00:52:05.012071 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:52:05.012085 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:52:05.012097 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:52:05.012106 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:52:05.012112 kernel: RETBleed: Vulnerable
Sep 13 00:52:05.012118 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:52:05.012126 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:52:05.012138 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:52:05.012149 kernel: active return thunk: its_return_thunk
Sep 13 00:52:05.012157 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:52:05.012163 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:52:05.012172 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:52:05.012186 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:52:05.012197 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:52:05.012209 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:52:05.012215 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:52:05.012222 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:52:05.012233 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 13 00:52:05.012245 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 13 00:52:05.012257 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 13 00:52:05.012269 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 13 00:52:05.012285 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:52:05.012294 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:52:05.012307 kernel: LSM: Security Framework initializing
Sep 13 00:52:05.012319 kernel: SELinux: Initializing.
Sep 13 00:52:05.012331 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:52:05.012342 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:52:05.012357 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Sep 13 00:52:05.012365 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Sep 13 00:52:05.012377 kernel: signal: max sigframe size: 3632
Sep 13 00:52:05.012389 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:52:05.012405 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:52:05.012414 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:52:05.012434 kernel: x86: Booting SMP configuration:
Sep 13 00:52:05.012444 kernel: .... node #0, CPUs: #1
Sep 13 00:52:05.023439 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 13 00:52:05.023465 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:52:05.023473 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:52:05.023482 kernel: smpboot: Max logical packages: 1
Sep 13 00:52:05.023494 kernel: smpboot: Total of 2 processors activated (11173.74 BogoMIPS)
Sep 13 00:52:05.023502 kernel: devtmpfs: initialized
Sep 13 00:52:05.023508 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:52:05.023519 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 13 00:52:05.023529 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:52:05.023536 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:52:05.023543 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:52:05.023549 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:52:05.023556 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:52:05.023562 kernel: audit: type=2000 audit(1757724723.024:1): state=initialized audit_enabled=0 res=1
Sep 13 00:52:05.023572 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:52:05.023582 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:52:05.023590 kernel: cpuidle: using governor menu
Sep 13 00:52:05.023601 kernel: ACPI: bus type PCI registered
Sep 13 00:52:05.023608 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:52:05.023614 kernel: dca service started, version 1.12.1
Sep 13 00:52:05.023621 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:52:05.023632 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:52:05.023640 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:52:05.023647 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:52:05.023659 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:52:05.023665 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:52:05.023671 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:52:05.023677 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:52:05.023687 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:52:05.023695 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:52:05.023702 kernel: ACPI: Interpreter enabled
Sep 13 00:52:05.023712 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:52:05.023721 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:52:05.023727 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:52:05.023741 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 13 00:52:05.023749 kernel: iommu: Default domain type: Translated
Sep 13 00:52:05.023757 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:52:05.023766 kernel: vgaarb: loaded
Sep 13 00:52:05.023773 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:52:05.023782 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:52:05.023790 kernel: PTP clock support registered
Sep 13 00:52:05.023797 kernel: Registered efivars operations
Sep 13 00:52:05.023803 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:52:05.023819 kernel: PCI: System does not support PCI
Sep 13 00:52:05.023829 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 13 00:52:05.023839 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:52:05.023848 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:52:05.023855 kernel: pnp: PnP ACPI init
Sep 13 00:52:05.023866 kernel: pnp: PnP ACPI: found 3 devices
Sep 13 00:52:05.023872 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:52:05.023879 kernel: NET: Registered PF_INET protocol family
Sep 13 00:52:05.023885 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:52:05.023894 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 13 00:52:05.023900 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:52:05.023906 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:52:05.023913 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Sep 13 00:52:05.023919 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 13 00:52:05.023927 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:52:05.023939 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:52:05.023951 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:52:05.023959 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:52:05.023967 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:52:05.023973 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 00:52:05.023981 kernel: software IO TLB: mapped [mem 0x000000003aa89000-0x000000003ea89000] (64MB)
Sep 13 00:52:05.023991 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:52:05.023998 kernel: Initialise system trusted keyrings
Sep 13 00:52:05.024004 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 13 00:52:05.024014 kernel: Key type asymmetric registered
Sep 13 00:52:05.024022 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:52:05.024028 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:52:05.024036 kernel: io scheduler mq-deadline registered
Sep 13 00:52:05.024042 kernel: io scheduler kyber registered
Sep 13 00:52:05.024050 kernel: io scheduler bfq registered
Sep 13 00:52:05.024064 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:52:05.024072 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:52:05.024078 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:52:05.024088 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 13 00:52:05.024095 kernel: i8042: PNP: No PS/2 controller found.
Sep 13 00:52:05.024236 kernel: rtc_cmos 00:02: registered as rtc0
Sep 13 00:52:05.024334 kernel: rtc_cmos 00:02: setting system clock to 2025-09-13T00:52:04 UTC (1757724724)
Sep 13 00:52:05.024434 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 13 00:52:05.024444 kernel: intel_pstate: CPU model not supported
Sep 13 00:52:05.024450 kernel: efifb: probing for efifb
Sep 13 00:52:05.024459 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 00:52:05.024474 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 00:52:05.024480 kernel: efifb: scrolling: redraw
Sep 13 00:52:05.024491 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:52:05.024500 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:52:05.024507 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:52:05.024515 kernel: pstore: Registered efi as persistent store backend
Sep 13 00:52:05.024524 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:52:05.024530 kernel: Segment Routing with IPv6
Sep 13 00:52:05.024537 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:52:05.024544 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:52:05.024557 kernel: Key type dns_resolver registered
Sep 13 00:52:05.024570 kernel: IPI shorthand broadcast: enabled
Sep 13 00:52:05.024577 kernel: sched_clock: Marking stable (803395800, 23321900)->(1035144000, -208426300)
Sep 13 00:52:05.024584 kernel: registered taskstats version 1
Sep 13 00:52:05.024592 kernel: Loading compiled-in X.509 certificates
Sep 13 00:52:05.024601 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:52:05.024608 kernel: Key type .fscrypt registered
Sep 13 00:52:05.024614 kernel: Key type fscrypt-provisioning registered
Sep 13 00:52:05.024623 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:52:05.024632 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:52:05.024640 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:52:05.024649 kernel: ima: No architecture policies found
Sep 13 00:52:05.024663 kernel: clk: Disabling unused clocks
Sep 13 00:52:05.024673 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:52:05.024684 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:52:05.024690 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:52:05.024700 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:52:05.024708 kernel: Run /init as init process
Sep 13 00:52:05.024716 kernel: with arguments:
Sep 13 00:52:05.024727 kernel: /init
Sep 13 00:52:05.024734 kernel: with environment:
Sep 13 00:52:05.024742 kernel: HOME=/
Sep 13 00:52:05.024754 kernel: TERM=linux
Sep 13 00:52:05.024761 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:52:05.024773 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:52:05.024783 systemd[1]: Detected virtualization microsoft.
Sep 13 00:52:05.024793 systemd[1]: Detected architecture x86-64.
Sep 13 00:52:05.024803 systemd[1]: Running in initrd.
Sep 13 00:52:05.024812 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:52:05.024824 systemd[1]: Hostname set to .
Sep 13 00:52:05.024832 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:52:05.024840 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:52:05.024850 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:52:05.024858 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:52:05.024870 systemd[1]: Reached target paths.target.
Sep 13 00:52:05.024878 systemd[1]: Reached target slices.target.
Sep 13 00:52:05.024887 systemd[1]: Reached target swap.target.
Sep 13 00:52:05.024902 systemd[1]: Reached target timers.target.
Sep 13 00:52:05.024911 systemd[1]: Listening on iscsid.socket.
Sep 13 00:52:05.024921 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:52:05.024928 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:52:05.024937 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:52:05.024946 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:52:05.024960 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:52:05.024968 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:52:05.024976 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:52:05.024992 systemd[1]: Reached target sockets.target.
Sep 13 00:52:05.025002 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:52:05.025016 systemd[1]: Finished network-cleanup.service.
Sep 13 00:52:05.025025 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:52:05.025034 systemd[1]: Starting systemd-journald.service...
Sep 13 00:52:05.025043 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:52:05.025056 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:52:05.025063 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:52:05.025078 systemd-journald[183]: Journal started
Sep 13 00:52:05.025129 systemd-journald[183]: Runtime Journal (/run/log/journal/5957b66c71e84694a637adf4f2a4d3e9) is 8.0M, max 159.0M, 151.0M free.
Sep 13 00:52:05.028751 systemd-modules-load[184]: Inserted module 'overlay'
Sep 13 00:52:05.045444 systemd[1]: Started systemd-journald.service.
Sep 13 00:52:05.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.058980 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:52:05.079226 kernel: audit: type=1130 audit(1757724725.058:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.079450 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:52:05.084404 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:52:05.087858 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:52:05.101041 systemd-resolved[185]: Positive Trust Anchors:
Sep 13 00:52:05.103694 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:52:05.101233 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:52:05.101278 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:52:05.111714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:52:05.157968 kernel: Bridge firewalling registered
Sep 13 00:52:05.157996 kernel: audit: type=1130 audit(1757724725.078:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.128795 systemd-resolved[185]: Defaulting to hostname 'linux'.
Sep 13 00:52:05.133997 systemd[1]: Started systemd-resolved.service.
Sep 13 00:52:05.137463 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:52:05.145143 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 13 00:52:05.145151 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:52:05.163690 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:52:05.171348 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:52:05.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.199177 dracut-cmdline[200]: dracut-dracut-053
Sep 13 00:52:05.214874 kernel: audit: type=1130 audit(1757724725.083:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.214906 kernel: audit: type=1130 audit(1757724725.086:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.234534 kernel: audit: type=1130 audit(1757724725.136:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.234603 kernel: SCSI subsystem initialized
Sep 13 00:52:05.234617 kernel: audit: type=1130 audit(1757724725.139:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.234972 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:52:05.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.283178 kernel: audit: type=1130 audit(1757724725.163:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:05.283232 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:52:05.295388 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:52:05.303446 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:52:05.306855 systemd-modules-load[184]: Inserted module 'dm_multipath' Sep 13 00:52:05.309173 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:52:05.313861 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:52:05.336524 kernel: audit: type=1130 audit(1757724725.312:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:05.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:05.339351 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:52:05.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:05.359448 kernel: audit: type=1130 audit(1757724725.341:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:05.365442 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:52:05.386447 kernel: iscsi: registered transport (tcp) Sep 13 00:52:05.415096 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:52:05.415167 kernel: QLogic iSCSI HBA Driver Sep 13 00:52:05.443811 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:52:05.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:05.448955 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:52:05.500450 kernel: raid6: avx512x4 gen() 43738 MB/s Sep 13 00:52:05.520438 kernel: raid6: avx512x4 xor() 9733 MB/s Sep 13 00:52:05.540440 kernel: raid6: avx512x2 gen() 43237 MB/s Sep 13 00:52:05.561439 kernel: raid6: avx512x2 xor() 27054 MB/s Sep 13 00:52:05.581438 kernel: raid6: avx512x1 gen() 43867 MB/s Sep 13 00:52:05.601438 kernel: raid6: avx512x1 xor() 24897 MB/s Sep 13 00:52:05.621438 kernel: raid6: avx2x4 gen() 34778 MB/s Sep 13 00:52:05.641438 kernel: raid6: avx2x4 xor() 9627 MB/s Sep 13 00:52:05.661438 kernel: raid6: avx2x2 gen() 34601 MB/s Sep 13 00:52:05.681439 kernel: raid6: avx2x2 xor() 21596 MB/s Sep 13 00:52:05.701444 kernel: raid6: avx2x1 gen() 26932 MB/s Sep 13 00:52:05.721438 kernel: raid6: avx2x1 xor() 17215 MB/s Sep 13 00:52:05.742438 kernel: raid6: sse2x4 gen() 10264 MB/s Sep 13 00:52:05.762438 kernel: raid6: sse2x4 xor() 5967 MB/s Sep 13 00:52:05.782438 kernel: raid6: sse2x2 gen() 10231 MB/s Sep 13 00:52:05.803438 kernel: raid6: sse2x2 xor() 6624 MB/s Sep 13 00:52:05.823438 kernel: raid6: sse2x1 gen() 9423 MB/s Sep 13 00:52:05.847033 kernel: raid6: sse2x1 xor() 5350 MB/s Sep 13 00:52:05.847049 kernel: raid6: using algorithm avx512x1 gen() 43867 MB/s Sep 13 00:52:05.847060 kernel: raid6: .... xor() 24897 MB/s, rmw enabled Sep 13 00:52:05.850550 kernel: raid6: using avx512x2 recovery algorithm Sep 13 00:52:05.871450 kernel: xor: automatically using best checksumming function avx Sep 13 00:52:05.971452 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:52:05.979589 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:52:05.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:05.983000 audit: BPF prog-id=7 op=LOAD Sep 13 00:52:05.983000 audit: BPF prog-id=8 op=LOAD Sep 13 00:52:05.984741 systemd[1]: Starting systemd-udevd.service... Sep 13 00:52:05.999215 systemd-udevd[384]: Using default interface naming scheme 'v252'. Sep 13 00:52:06.003719 systemd[1]: Started systemd-udevd.service. Sep 13 00:52:06.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:06.013588 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:52:06.027975 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Sep 13 00:52:06.055575 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:52:06.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:06.058606 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:52:06.095209 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:52:06.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:06.143442 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:52:06.171444 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 00:52:06.176446 kernel: AES CTR mode by8 optimization enabled Sep 13 00:52:06.180449 kernel: hv_vmbus: Vmbus version:5.2 Sep 13 00:52:06.190443 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 13 00:52:06.212460 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:52:06.223445 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 13 00:52:06.228442 kernel: hv_vmbus: registering driver hv_netvsc Sep 13 00:52:06.235437 kernel: hv_vmbus: registering driver hv_storvsc Sep 13 00:52:06.246506 kernel: scsi host0: storvsc_host_t Sep 13 00:52:06.246760 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 13 00:52:06.253353 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 13 00:52:06.253414 kernel: hv_vmbus: registering driver hid_hyperv Sep 13 00:52:06.259995 kernel: scsi host1: storvsc_host_t Sep 13 00:52:06.265749 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 13 00:52:06.272276 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 13 00:52:06.299284 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 13 00:52:06.320469 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 13 00:52:06.320659 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 00:52:06.320826 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 13 00:52:06.320992 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 13 00:52:06.321151 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:52:06.321168 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 00:52:06.331264 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 13 00:52:06.332228 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:52:06.332248 kernel: sr 0:0:0:2: Attached scsi 
CD-ROM sr0 Sep 13 00:52:06.348452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 13 00:52:06.371444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 13 00:52:06.384443 kernel: hv_netvsc 6045bdfb-c9b9-6045-bdfb-c9b96045bdfb eth0: VF slot 1 added Sep 13 00:52:06.400540 kernel: hv_vmbus: registering driver hv_pci Sep 13 00:52:06.400599 kernel: hv_pci bffb67c2-b10e-4813-83d3-f2e3ab69e371: PCI VMBus probing: Using version 0x10004 Sep 13 00:52:06.481937 kernel: hv_pci bffb67c2-b10e-4813-83d3-f2e3ab69e371: PCI host bridge to bus b10e:00 Sep 13 00:52:06.482122 kernel: pci_bus b10e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 13 00:52:06.482300 kernel: pci_bus b10e:00: No busn resource found for root bus, will use [bus 00-ff] Sep 13 00:52:06.482460 kernel: pci b10e:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 13 00:52:06.482640 kernel: pci b10e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 13 00:52:06.482785 kernel: pci b10e:00:02.0: enabling Extended Tags Sep 13 00:52:06.482934 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (445) Sep 13 00:52:06.482952 kernel: pci b10e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b10e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 13 00:52:06.483101 kernel: pci_bus b10e:00: busn_res: [bus 00-ff] end is updated to 00 Sep 13 00:52:06.483239 kernel: pci b10e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 13 00:52:06.466075 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:52:06.478270 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:52:06.534091 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Sep 13 00:52:06.545649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:52:06.552843 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:52:06.559537 systemd[1]: Starting disk-uuid.service... Sep 13 00:52:06.578444 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:52:06.601448 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:52:06.613535 kernel: mlx5_core b10e:00:02.0: enabling device (0000 -> 0002) Sep 13 00:52:06.913075 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:52:06.913100 kernel: mlx5_core b10e:00:02.0: firmware version: 16.30.5000 Sep 13 00:52:06.913270 kernel: mlx5_core b10e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Sep 13 00:52:06.913437 kernel: mlx5_core b10e:00:02.0: Supported tc offload range - chains: 1, prios: 1 Sep 13 00:52:06.913596 kernel: mlx5_core b10e:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Sep 13 00:52:06.913747 kernel: hv_netvsc 6045bdfb-c9b9-6045-bdfb-c9b96045bdfb eth0: VF registering: eth1 Sep 13 00:52:06.913889 kernel: mlx5_core b10e:00:02.0 eth1: joined to eth0 Sep 13 00:52:06.922441 kernel: mlx5_core b10e:00:02.0 enP45326s1: renamed from eth1 Sep 13 00:52:07.611450 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:52:07.611594 disk-uuid[561]: The operation has completed successfully. Sep 13 00:52:07.694810 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:52:07.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:07.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:07.694924 systemd[1]: Finished disk-uuid.service. 
Sep 13 00:52:07.697698 systemd[1]: Starting verity-setup.service... Sep 13 00:52:07.724441 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:52:07.812244 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:52:07.817886 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:52:07.821899 systemd[1]: Finished verity-setup.service. Sep 13 00:52:07.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:07.894442 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:52:07.895009 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:52:07.898720 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:52:07.903062 systemd[1]: Starting ignition-setup.service... Sep 13 00:52:07.905952 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:52:07.932895 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:52:07.932950 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:52:07.932968 kernel: BTRFS info (device sda6): has skinny extents Sep 13 00:52:07.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:07.976802 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:52:07.981000 audit: BPF prog-id=9 op=LOAD Sep 13 00:52:07.983580 systemd[1]: Starting systemd-networkd.service... Sep 13 00:52:07.988288 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Sep 13 00:52:08.009504 systemd-networkd[842]: lo: Link UP Sep 13 00:52:08.010457 systemd-networkd[842]: lo: Gained carrier Sep 13 00:52:08.011353 systemd-networkd[842]: Enumeration completed Sep 13 00:52:08.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:08.011442 systemd[1]: Started systemd-networkd.service. Sep 13 00:52:08.013892 systemd[1]: Reached target network.target. Sep 13 00:52:08.017281 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:52:08.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:08.019994 systemd[1]: Starting iscsiuio.service... Sep 13 00:52:08.027174 systemd[1]: Started iscsiuio.service. Sep 13 00:52:08.038095 iscsid[848]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:52:08.038095 iscsid[848]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 00:52:08.038095 iscsid[848]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:52:08.038095 iscsid[848]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:52:08.038095 iscsid[848]: If using hardware iscsi like qla4xxx this message can be ignored. 
Sep 13 00:52:08.038095 iscsid[848]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:52:08.038095 iscsid[848]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:52:08.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:08.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:08.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:08.031948 systemd[1]: Starting iscsid.service... Sep 13 00:52:08.041781 systemd[1]: Started iscsid.service. Sep 13 00:52:08.044021 systemd[1]: Finished ignition-setup.service. Sep 13 00:52:08.055665 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:52:08.109460 kernel: mlx5_core b10e:00:02.0 enP45326s1: Link up Sep 13 00:52:08.109701 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 13 00:52:08.062715 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:52:08.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:08.071493 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:52:08.077504 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:52:08.083225 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:52:08.087241 systemd[1]: Reached target remote-fs.target. Sep 13 00:52:08.090477 systemd[1]: Starting dracut-pre-mount.service... 
Sep 13 00:52:08.103035 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:52:08.154491 kernel: hv_netvsc 6045bdfb-c9b9-6045-bdfb-c9b96045bdfb eth0: Data path switched to VF: enP45326s1 Sep 13 00:52:08.155027 systemd-networkd[842]: enP45326s1: Link UP Sep 13 00:52:08.161657 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:52:08.155136 systemd-networkd[842]: eth0: Link UP Sep 13 00:52:08.159627 systemd-networkd[842]: eth0: Gained carrier Sep 13 00:52:08.166611 systemd-networkd[842]: enP45326s1: Gained carrier Sep 13 00:52:08.177484 systemd-networkd[842]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 13 00:52:08.742944 ignition[852]: Ignition 2.14.0 Sep 13 00:52:08.742958 ignition[852]: Stage: fetch-offline Sep 13 00:52:08.743023 ignition[852]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:52:08.743062 ignition[852]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:52:08.756280 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:52:08.771528 ignition[852]: parsed url from cmdline: "" Sep 13 00:52:08.771628 ignition[852]: no config URL provided Sep 13 00:52:08.772715 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:52:08.772736 ignition[852]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:52:08.772742 ignition[852]: failed to fetch config: resource requires networking Sep 13 00:52:08.773194 ignition[852]: Ignition finished successfully Sep 13 00:52:08.783622 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:52:08.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:08.786879 systemd[1]: Starting ignition-fetch.service... Sep 13 00:52:08.797289 ignition[869]: Ignition 2.14.0 Sep 13 00:52:08.797301 ignition[869]: Stage: fetch Sep 13 00:52:08.797471 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:52:08.797504 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:52:08.812342 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:52:08.812579 ignition[869]: parsed url from cmdline: "" Sep 13 00:52:08.812583 ignition[869]: no config URL provided Sep 13 00:52:08.812588 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:52:08.812595 ignition[869]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:52:08.812624 ignition[869]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 13 00:52:08.878617 ignition[869]: GET result: OK Sep 13 00:52:08.878759 ignition[869]: config has been read from IMDS userdata Sep 13 00:52:08.878794 ignition[869]: parsing config with SHA512: 869bceae72a962520bd58ee5a60b61b507141800a0f1a7b83a0b44a8840d73bef849a3a28414e9b3113009cd8f50f998539ba16ae71cd1131af3c6f0d30269a9 Sep 13 00:52:08.889099 unknown[869]: fetched base config from "system" Sep 13 00:52:08.889634 ignition[869]: fetch: fetch complete Sep 13 00:52:08.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:08.889122 unknown[869]: fetched base config from "system" Sep 13 00:52:08.889640 ignition[869]: fetch: fetch passed Sep 13 00:52:08.889130 unknown[869]: fetched user config from "azure" Sep 13 00:52:08.889679 ignition[869]: Ignition finished successfully Sep 13 00:52:08.891117 systemd[1]: Finished ignition-fetch.service. Sep 13 00:52:08.894938 systemd[1]: Starting ignition-kargs.service... Sep 13 00:52:08.906787 ignition[875]: Ignition 2.14.0 Sep 13 00:52:08.906795 ignition[875]: Stage: kargs Sep 13 00:52:08.906916 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:52:08.906941 ignition[875]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:52:08.913492 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:52:08.914966 ignition[875]: kargs: kargs passed Sep 13 00:52:08.915008 ignition[875]: Ignition finished successfully Sep 13 00:52:08.925419 systemd[1]: Finished ignition-kargs.service. Sep 13 00:52:08.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:08.930385 systemd[1]: Starting ignition-disks.service... 
Sep 13 00:52:08.938498 ignition[881]: Ignition 2.14.0 Sep 13 00:52:08.938510 ignition[881]: Stage: disks Sep 13 00:52:08.938644 ignition[881]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:52:08.938674 ignition[881]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:52:08.945885 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:52:08.947253 ignition[881]: disks: disks passed Sep 13 00:52:08.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:08.949467 systemd[1]: Finished ignition-disks.service. Sep 13 00:52:08.947296 ignition[881]: Ignition finished successfully Sep 13 00:52:08.951720 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:52:08.956106 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:52:08.958420 systemd[1]: Reached target local-fs.target. Sep 13 00:52:08.960495 systemd[1]: Reached target sysinit.target. Sep 13 00:52:08.962454 systemd[1]: Reached target basic.target. Sep 13 00:52:08.967202 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:52:08.994465 systemd-fsck[889]: ROOT: clean, 629/7326000 files, 481084/7359488 blocks Sep 13 00:52:08.998988 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:52:09.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:09.003940 systemd[1]: Mounting sysroot.mount... Sep 13 00:52:09.021453 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:52:09.021541 systemd[1]: Mounted sysroot.mount. 
Sep 13 00:52:09.025277 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:52:09.037584 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:52:09.042618 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 13 00:52:09.047532 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:52:09.047572 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:52:09.055932 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:52:09.075628 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:52:09.081107 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:52:09.094442 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (899) Sep 13 00:52:09.094494 initrd-setup-root[904]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:52:09.106321 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:52:09.106373 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:52:09.106387 kernel: BTRFS info (device sda6): has skinny extents Sep 13 00:52:09.109865 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:52:09.119161 initrd-setup-root[938]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:52:09.125452 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:52:09.131523 initrd-setup-root[946]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:52:09.282663 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:52:09.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:09.290207 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 00:52:09.290232 kernel: audit: type=1130 audit(1757724729.284:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:09.291261 systemd[1]: Starting ignition-mount.service... Sep 13 00:52:09.305691 systemd[1]: Starting sysroot-boot.service... Sep 13 00:52:09.311713 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 13 00:52:09.314317 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 13 00:52:09.333227 systemd[1]: Finished sysroot-boot.service. Sep 13 00:52:09.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:09.350632 kernel: audit: type=1130 audit(1757724729.335:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:09.359791 ignition[969]: INFO : Ignition 2.14.0 Sep 13 00:52:09.362592 ignition[969]: INFO : Stage: mount Sep 13 00:52:09.364560 ignition[969]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:52:09.364560 ignition[969]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:52:09.374547 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:52:09.374547 ignition[969]: INFO : mount: mount passed Sep 13 00:52:09.374547 ignition[969]: INFO : Ignition finished successfully Sep 13 00:52:09.377594 systemd[1]: Finished ignition-mount.service. 
Sep 13 00:52:09.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:09.398447 kernel: audit: type=1130 audit(1757724729.385:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:09.442124 coreos-metadata[898]: Sep 13 00:52:09.442 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 13 00:52:09.447574 coreos-metadata[898]: Sep 13 00:52:09.447 INFO Fetch successful Sep 13 00:52:09.479997 coreos-metadata[898]: Sep 13 00:52:09.479 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 13 00:52:09.493312 coreos-metadata[898]: Sep 13 00:52:09.493 INFO Fetch successful Sep 13 00:52:09.498218 coreos-metadata[898]: Sep 13 00:52:09.498 INFO wrote hostname ci-3510.3.8-n-1677b4f607 to /sysroot/etc/hostname Sep 13 00:52:09.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:09.499997 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 13 00:52:09.521914 kernel: audit: type=1130 audit(1757724729.504:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:09.505455 systemd[1]: Starting ignition-files.service... Sep 13 00:52:09.525114 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Sep 13 00:52:09.539441 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (977)
Sep 13 00:52:09.548753 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:52:09.548791 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:52:09.548805 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 00:52:09.558781 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:52:09.571098 ignition[996]: INFO : Ignition 2.14.0
Sep 13 00:52:09.571098 ignition[996]: INFO : Stage: files
Sep 13 00:52:09.576505 ignition[996]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:52:09.576505 ignition[996]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 00:52:09.586556 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:52:09.590677 ignition[996]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:52:09.597155 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:52:09.597155 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:52:09.614128 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:52:09.617591 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:52:09.627378 unknown[996]: wrote ssh authorized keys file for user: core
Sep 13 00:52:09.630041 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:52:09.636538 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:52:09.640993 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:52:09.645385 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:52:09.650277 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:52:09.699766 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:52:09.888167 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:52:09.888167 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:52:09.898032 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:52:09.902393 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:52:09.906973 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:52:09.911390 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:52:09.915971 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:52:09.915971 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:52:09.915971 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:52:09.915971 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:52:09.915971 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:52:09.915971 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:52:09.915971 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem656162459"
Sep 13 00:52:09.953712 ignition[996]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem656162459": device or resource busy
Sep 13 00:52:09.953712 ignition[996]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem656162459", trying btrfs: device or resource busy
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem656162459"
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem656162459"
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem656162459"
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem656162459"
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:52:09.953712 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1681158293"
Sep 13 00:52:09.953712 ignition[996]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1681158293": device or resource busy
Sep 13 00:52:10.023463 ignition[996]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1681158293", trying btrfs: device or resource busy
Sep 13 00:52:10.023463 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1681158293"
Sep 13 00:52:10.023463 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1681158293"
Sep 13 00:52:10.023463 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1681158293"
Sep 13 00:52:10.023463 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1681158293"
Sep 13 00:52:10.023463 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:52:10.023463 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:52:10.023463 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:52:10.084655 systemd-networkd[842]: eth0: Gained IPv6LL
Sep 13 00:52:10.401792 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Sep 13 00:52:10.584004 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:52:10.584004 ignition[996]: INFO : files: op(14): [started] processing unit "waagent.service"
Sep 13 00:52:10.584004 ignition[996]: INFO : files: op(14): [finished] processing unit "waagent.service"
Sep 13 00:52:10.615108 kernel: audit: type=1130 audit(1757724730.591:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(15): [started] processing unit "nvidia.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(16): [started] processing unit "containerd.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(16): op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(16): [finished] processing unit "containerd.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(18): [started] processing unit "prepare-helm.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(18): [finished] processing unit "prepare-helm.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(1c): [started] setting preset to enabled for "waagent.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: op(1c): [finished] setting preset to enabled for "waagent.service"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:52:10.615189 ignition[996]: INFO : files: files passed
Sep 13 00:52:10.615189 ignition[996]: INFO : Ignition finished successfully
Sep 13 00:52:10.732772 kernel: audit: type=1130 audit(1757724730.645:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.732811 kernel: audit: type=1130 audit(1757724730.702:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.732835 kernel: audit: type=1131 audit(1757724730.702:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.588073 systemd[1]: Finished ignition-files.service.
Sep 13 00:52:10.593030 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:52:10.737617 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:52:10.610930 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:52:10.611840 systemd[1]: Starting ignition-quench.service...
Sep 13 00:52:10.637191 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:52:10.684581 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:52:10.700982 systemd[1]: Finished ignition-quench.service.
Sep 13 00:52:10.703171 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:52:10.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.737907 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:52:10.792930 kernel: audit: type=1130 audit(1757724730.763:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.792968 kernel: audit: type=1131 audit(1757724730.763:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.760486 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:52:10.760582 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:52:10.763917 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:52:10.788549 systemd[1]: Reached target initrd.target.
Sep 13 00:52:10.792981 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:52:10.793885 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:52:10.810841 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:52:10.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.815752 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:52:10.824065 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:52:10.826385 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:52:10.830885 systemd[1]: Stopped target timers.target.
Sep 13 00:52:10.835203 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:52:10.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.835364 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:52:10.839728 systemd[1]: Stopped target initrd.target.
Sep 13 00:52:10.844040 systemd[1]: Stopped target basic.target.
Sep 13 00:52:10.847958 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:52:10.852162 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:52:10.856369 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:52:10.861291 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:52:10.865395 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:52:10.869932 systemd[1]: Stopped target sysinit.target.
Sep 13 00:52:10.873860 systemd[1]: Stopped target local-fs.target.
Sep 13 00:52:10.877886 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:52:10.881723 systemd[1]: Stopped target swap.target.
Sep 13 00:52:10.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.885647 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:52:10.885811 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:52:10.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.889869 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:52:10.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.893614 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:52:10.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.893760 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:52:10.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.898224 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:52:10.926897 ignition[1034]: INFO : Ignition 2.14.0
Sep 13 00:52:10.926897 ignition[1034]: INFO : Stage: umount
Sep 13 00:52:10.926897 ignition[1034]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:52:10.926897 ignition[1034]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 00:52:10.898359 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:52:10.942385 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:52:10.942385 ignition[1034]: INFO : umount: umount passed
Sep 13 00:52:10.942385 ignition[1034]: INFO : Ignition finished successfully
Sep 13 00:52:10.902742 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:52:10.902854 systemd[1]: Stopped ignition-files.service.
Sep 13 00:52:10.906784 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:52:10.906898 systemd[1]: Stopped flatcar-metadata-hostname.service.
Sep 13 00:52:10.912487 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:52:10.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.925130 systemd[1]: Stopping iscsiuio.service...
Sep 13 00:52:10.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.953849 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:52:10.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.954981 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:52:11.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.967192 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:52:10.970038 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:52:10.970198 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:52:10.972870 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:52:11.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.973014 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:52:10.977343 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:52:11.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.977474 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:52:10.980072 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:52:10.980183 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:52:10.982854 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:52:10.982960 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:52:11.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.985266 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:52:11.047000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:52:10.985313 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:52:10.989144 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:52:11.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.989195 systemd[1]: Stopped ignition-fetch.service.
Sep 13 00:52:10.993712 systemd[1]: Stopped target network.target.
Sep 13 00:52:11.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:10.995837 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:52:10.995897 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:52:11.000257 systemd[1]: Stopped target paths.target.
Sep 13 00:52:11.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.004153 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:52:11.008830 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:52:11.011275 systemd[1]: Stopped target slices.target.
Sep 13 00:52:11.013202 systemd[1]: Stopped target sockets.target.
Sep 13 00:52:11.014140 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:52:11.014175 systemd[1]: Closed iscsid.socket.
Sep 13 00:52:11.014583 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:52:11.014617 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:52:11.015022 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:52:11.015059 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:52:11.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.015612 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:52:11.016211 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:52:11.017552 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:52:11.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.018100 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:52:11.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.018190 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:52:11.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.026682 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:52:11.026781 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:52:11.029817 systemd-networkd[842]: eth0: DHCPv6 lease lost
Sep 13 00:52:11.129000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:52:11.039232 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:52:11.043341 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:52:11.149484 kernel: hv_netvsc 6045bdfb-c9b9-6045-bdfb-c9b96045bdfb eth0: Data path switched from VF: enP45326s1
Sep 13 00:52:11.047874 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:52:11.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.047904 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:52:11.053068 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:52:11.056375 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:52:11.056444 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:52:11.061172 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:52:11.061222 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:52:11.067055 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:52:11.069699 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:52:11.078558 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:52:11.088277 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:52:11.098196 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:52:11.098343 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:52:11.103758 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:52:11.103796 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:52:11.107907 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:52:11.107943 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:52:11.112042 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:52:11.112092 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:52:11.116402 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:52:11.116477 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:52:11.120956 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:52:11.120999 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:52:11.125783 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:52:11.137561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:52:11.137627 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:52:11.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.151154 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:52:11.151255 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:52:11.206725 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:52:11.206827 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:52:11.530989 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:52:11.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.531101 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:52:11.535806 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:52:11.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:11.540177 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:52:11.540250 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:52:11.545595 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:52:11.563690 systemd[1]: Switching root.
Sep 13 00:52:11.567000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 00:52:11.567000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 00:52:11.568000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:52:11.568000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:52:11.568000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:52:11.591613 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:52:11.591701 iscsid[848]: iscsid shutting down.
Sep 13 00:52:11.593896 systemd-journald[183]: Journal stopped
Sep 13 00:52:17.148409 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:52:17.148456 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:52:17.148467 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 00:52:17.148478 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:52:17.148486 kernel: SELinux: policy capability open_perms=1
Sep 13 00:52:17.148498 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:52:17.148506 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:52:17.148517 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:52:17.148527 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:52:17.148535 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:52:17.148545 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:52:17.148554 systemd[1]: Successfully loaded SELinux policy in 116.320ms.
Sep 13 00:52:17.148569 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.638ms.
Sep 13 00:52:17.148579 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:52:17.148596 systemd[1]: Detected virtualization microsoft.
Sep 13 00:52:17.148605 systemd[1]: Detected architecture x86-64.
Sep 13 00:52:17.148616 systemd[1]: Detected first boot.
Sep 13 00:52:17.148626 systemd[1]: Hostname set to .
Sep 13 00:52:17.148637 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:52:17.148647 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:52:17.148658 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:52:17.148668 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:52:17.148680 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:52:17.148690 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:52:17.148702 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:52:17.148711 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Sep 13 00:52:17.148725 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:52:17.148734 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:52:17.148746 systemd[1]: Created slice system-getty.slice.
Sep 13 00:52:17.148754 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:52:17.148767 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:52:17.148775 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:52:17.148787 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:52:17.148798 systemd[1]: Created slice user.slice.
Sep 13 00:52:17.148810 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:52:17.148819 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:52:17.148830 systemd[1]: Set up automount boot.automount.
Sep 13 00:52:17.148840 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:52:17.148852 systemd[1]: Reached target integritysetup.target.
Sep 13 00:52:17.148860 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:52:17.148875 systemd[1]: Reached target remote-fs.target.
Sep 13 00:52:17.148884 systemd[1]: Reached target slices.target.
Sep 13 00:52:17.148897 systemd[1]: Reached target swap.target.
Sep 13 00:52:17.148907 systemd[1]: Reached target torcx.target.
Sep 13 00:52:17.148919 systemd[1]: Reached target veritysetup.target.
Sep 13 00:52:17.148929 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:52:17.148937 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:52:17.148949 kernel: kauditd_printk_skb: 52 callbacks suppressed
Sep 13 00:52:17.148957 kernel: audit: type=1400 audit(1757724736.827:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:52:17.148968 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:52:17.148978 kernel: audit: type=1335 audit(1757724736.827:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 13 00:52:17.148988 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:52:17.148998 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:52:17.149009 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:52:17.149017 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:52:17.149030 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:52:17.149040 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:52:17.149051 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:52:17.149062 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:52:17.149070 systemd[1]: Mounting media.mount...
Sep 13 00:52:17.149083 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:52:17.149092 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:52:17.149106 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:52:17.149115 systemd[1]: Mounting tmp.mount...
Sep 13 00:52:17.149127 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:52:17.149136 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:52:17.149148 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:52:17.149156 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:52:17.149168 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:52:17.149177 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:52:17.149189 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:52:17.149200 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:52:17.149208 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:52:17.149217 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:52:17.149229 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:52:17.149239 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:52:17.149252 kernel: fuse: init (API version 7.34)
Sep 13 00:52:17.149262 systemd[1]: Starting systemd-journald.service...
Sep 13 00:52:17.149273 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:52:17.149283 kernel: loop: module loaded
Sep 13 00:52:17.149298 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:52:17.149306 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:52:17.149316 kernel: audit: type=1305 audit(1757724737.126:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:52:17.149327 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:52:17.149338 kernel: audit: type=1300 audit(1757724737.126:91): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc567a0590 a2=4000 a3=7ffc567a062c items=0 ppid=1 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:17.149353 systemd-journald[1203]: Journal started
Sep 13 00:52:17.149398 systemd-journald[1203]: Runtime Journal (/run/log/journal/dc6937cbec7e43b9aa5e4a380df29b40) is 8.0M, max 159.0M, 151.0M free.
Sep 13 00:52:16.827000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 13 00:52:17.126000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:52:17.126000 audit[1203]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc567a0590 a2=4000 a3=7ffc567a062c items=0 ppid=1 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:17.169951 kernel: audit: type=1327 audit(1757724737.126:91): proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:52:17.126000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:52:17.179452 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:52:17.187667 systemd[1]: Started systemd-journald.service.
Sep 13 00:52:17.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.189200 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:52:17.203117 kernel: audit: type=1130 audit(1757724737.187:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.204813 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:52:17.206774 systemd[1]: Mounted media.mount.
Sep 13 00:52:17.208614 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:52:17.210752 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:52:17.212942 systemd[1]: Mounted tmp.mount.
Sep 13 00:52:17.214893 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:52:17.217162 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:52:17.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.231212 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:52:17.231472 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:52:17.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.247179 kernel: audit: type=1130 audit(1757724737.216:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.247222 kernel: audit: type=1130 audit(1757724737.230:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.247647 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:52:17.247834 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:52:17.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.262571 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:52:17.262822 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:52:17.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.277735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:52:17.277996 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:52:17.278437 kernel: audit: type=1130 audit(1757724737.246:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.278468 kernel: audit: type=1131 audit(1757724737.246:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.281467 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:52:17.281731 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:52:17.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.284726 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:52:17.284929 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:52:17.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.294822 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:52:17.298375 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:52:17.301691 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:52:17.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.305201 systemd[1]: Reached target network-pre.target.
Sep 13 00:52:17.309455 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:52:17.313491 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:52:17.315755 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:52:17.322969 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:52:17.326402 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:52:17.328791 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:52:17.330100 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:52:17.332082 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:52:17.333530 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:52:17.336863 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:52:17.341410 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:52:17.346473 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:52:17.360087 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:52:17.361315 systemd-journald[1203]: Time spent on flushing to /var/log/journal/dc6937cbec7e43b9aa5e4a380df29b40 is 41.644ms for 1082 entries.
Sep 13 00:52:17.361315 systemd-journald[1203]: System Journal (/var/log/journal/dc6937cbec7e43b9aa5e4a380df29b40) is 8.0M, max 2.6G, 2.6G free.
Sep 13 00:52:17.446598 systemd-journald[1203]: Received client request to flush runtime journal.
Sep 13 00:52:17.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.368162 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:52:17.394477 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:52:17.449741 udevadm[1241]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:52:17.398324 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:52:17.411458 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:52:17.447722 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:52:17.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.606804 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:52:17.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.610722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:52:17.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:17.826316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:52:18.085629 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:52:18.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:18.089952 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:52:18.108674 systemd-udevd[1250]: Using default interface naming scheme 'v252'.
Sep 13 00:52:18.421075 systemd[1]: Started systemd-udevd.service.
Sep 13 00:52:18.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:18.425278 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:52:18.462238 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:52:18.478095 systemd[1]: Found device dev-ttyS0.device.
Sep 13 00:52:18.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:18.529736 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:52:18.602000 audit[1258]: AVC avc: denied { confidentiality } for pid=1258 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 00:52:18.627443 kernel: hv_vmbus: registering driver hv_balloon
Sep 13 00:52:18.645009 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:52:18.654474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#247 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 13 00:52:18.654796 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 13 00:52:18.602000 audit[1258]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5650529da110 a1=f83c a2=7f2bdbaa9bc5 a3=5 items=12 ppid=1250 pid=1258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:52:18.602000 audit: CWD cwd="/"
Sep 13 00:52:18.602000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=1 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=2 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=3 name=(null) inode=15127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=4 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=5 name=(null) inode=15128 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=6 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=7 name=(null) inode=15129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=8 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=9 name=(null) inode=15130 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=10 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PATH item=11 name=(null) inode=15131 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:52:18.602000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 13 00:52:18.697456 kernel: hv_vmbus: registering driver hyperv_fb
Sep 13 00:52:18.707651 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 00:52:18.707750 kernel: hv_vmbus: registering driver hv_utils
Sep 13 00:52:18.720195 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 13 00:52:18.720392 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 13 00:52:18.732929 kernel: Console: switching to colour dummy device 80x25
Sep 13 00:52:18.737480 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:52:18.762704 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 00:52:18.762785 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 00:52:18.762812 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 00:52:18.693537 systemd-networkd[1256]: lo: Link UP
Sep 13 00:52:18.774270 systemd-journald[1203]: Time jumped backwards, rotating.
Sep 13 00:52:18.774371 kernel: mlx5_core b10e:00:02.0 enP45326s1: Link up
Sep 13 00:52:18.774629 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 00:52:18.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:18.693961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:52:18.694456 systemd-networkd[1256]: lo: Gained carrier
Sep 13 00:52:18.694984 systemd-networkd[1256]: Enumeration completed
Sep 13 00:52:18.700491 systemd[1]: Started systemd-networkd.service.
Sep 13 00:52:18.703104 systemd-networkd[1256]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:52:18.705121 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 00:52:18.792202 kernel: hv_netvsc 6045bdfb-c9b9-6045-bdfb-c9b96045bdfb eth0: Data path switched to VF: enP45326s1
Sep 13 00:52:18.795365 systemd-networkd[1256]: enP45326s1: Link UP
Sep 13 00:52:18.795914 systemd-networkd[1256]: eth0: Link UP
Sep 13 00:52:18.795997 systemd-networkd[1256]: eth0: Gained carrier
Sep 13 00:52:18.798230 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Sep 13 00:52:18.804991 systemd-networkd[1256]: enP45326s1: Gained carrier
Sep 13 00:52:18.813349 systemd-networkd[1256]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16
Sep 13 00:52:18.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:18.824680 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 00:52:18.828884 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 00:52:18.957248 lvm[1330]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:52:18.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:18.991208 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 00:52:18.994070 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:52:18.997446 systemd[1]: Starting lvm2-activation.service...
Sep 13 00:52:19.002333 lvm[1332]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:52:19.021206 systemd[1]: Finished lvm2-activation.service.
Sep 13 00:52:19.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.023580 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:52:19.025727 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:52:19.025758 systemd[1]: Reached target local-fs.target.
Sep 13 00:52:19.027703 systemd[1]: Reached target machines.target.
Sep 13 00:52:19.031459 systemd[1]: Starting ldconfig.service...
Sep 13 00:52:19.039889 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:52:19.039967 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:52:19.041003 systemd[1]: Starting systemd-boot-update.service...
Sep 13 00:52:19.044150 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 00:52:19.048405 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 00:52:19.051848 systemd[1]: Starting systemd-sysext.service...
Sep 13 00:52:19.061666 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1335 (bootctl)
Sep 13 00:52:19.063136 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 00:52:19.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.529652 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 00:52:19.537386 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 00:52:19.542980 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 00:52:19.543313 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 00:52:19.634211 kernel: loop0: detected capacity change from 0 to 221472
Sep 13 00:52:19.698211 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:52:19.714207 kernel: loop1: detected capacity change from 0 to 221472
Sep 13 00:52:19.725261 (sd-sysext)[1350]: Using extensions 'kubernetes'.
Sep 13 00:52:19.725700 (sd-sysext)[1350]: Merged extensions into '/usr'.
Sep 13 00:52:19.745962 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:52:19.746840 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 00:52:19.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.750157 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:52:19.751489 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 00:52:19.756324 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:52:19.758246 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:52:19.766454 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:52:19.777124 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:52:19.781743 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:52:19.782138 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:52:19.782478 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:52:19.787169 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 00:52:19.790109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:52:19.790318 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:52:19.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.793287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:52:19.793453 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:52:19.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.796533 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:52:19.796738 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:52:19.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.800782 systemd[1]: Finished systemd-sysext.service.
Sep 13 00:52:19.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:52:19.806139 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:52:19.808286 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:52:19.808357 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:52:19.809699 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 00:52:19.818232 systemd[1]: Reloading.
Sep 13 00:52:19.831251 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 00:52:19.872897 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:52:19.878822 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:52:19.883768 /usr/lib/systemd/system-generators/torcx-generator[1385]: time="2025-09-13T00:52:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:52:19.883801 /usr/lib/systemd/system-generators/torcx-generator[1385]: time="2025-09-13T00:52:19Z" level=info msg="torcx already run" Sep 13 00:52:19.945939 systemd-fsck[1347]: fsck.fat 4.2 (2021-01-31) Sep 13 00:52:19.945939 systemd-fsck[1347]: /dev/sda1: 790 files, 120761/258078 clusters Sep 13 00:52:19.993425 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:52:19.993446 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:52:20.008779 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:52:20.074574 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:52:20.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.085226 systemd[1]: Mounting boot.mount... Sep 13 00:52:20.094222 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:20.094455 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Sep 13 00:52:20.095744 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:52:20.099157 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:52:20.102727 systemd[1]: Starting modprobe@loop.service... Sep 13 00:52:20.109333 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:52:20.109560 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:20.109766 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:20.111346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:52:20.111529 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:52:20.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.114445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:52:20.114595 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:52:20.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:20.117534 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:52:20.117676 systemd[1]: Finished modprobe@loop.service. Sep 13 00:52:20.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.120939 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:52:20.121090 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:52:20.123747 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:20.124066 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:52:20.126455 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:52:20.130784 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:52:20.135044 systemd[1]: Starting modprobe@loop.service... Sep 13 00:52:20.138517 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:52:20.138694 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:20.138837 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:20.143900 systemd[1]: Mounted boot.mount. Sep 13 00:52:20.151576 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 13 00:52:20.151779 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:52:20.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.156595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:52:20.156789 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:52:20.159566 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:52:20.159774 systemd[1]: Finished modprobe@loop.service. Sep 13 00:52:20.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.166780 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Sep 13 00:52:20.169145 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:52:20.172645 systemd[1]: Starting modprobe@drm.service... Sep 13 00:52:20.176771 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:52:20.181984 systemd[1]: Starting modprobe@loop.service... Sep 13 00:52:20.184618 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:52:20.184970 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:20.189145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:52:20.190468 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:52:20.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.194382 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:52:20.194926 systemd[1]: Finished modprobe@drm.service. Sep 13 00:52:20.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:52:20.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.197683 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:52:20.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.200490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:52:20.200695 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:52:20.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.203544 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:52:20.203803 systemd[1]: Finished modprobe@loop.service. Sep 13 00:52:20.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.207815 systemd[1]: Finished ensure-sysext.service. 
Sep 13 00:52:20.211439 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:52:20.211488 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:52:20.356351 systemd-networkd[1256]: eth0: Gained IPv6LL Sep 13 00:52:20.361049 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:52:20.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.541613 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:20.541647 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:52:20.644080 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:52:20.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.648284 systemd[1]: Starting audit-rules.service... Sep 13 00:52:20.651668 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:52:20.657645 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:52:20.662168 systemd[1]: Starting systemd-resolved.service... Sep 13 00:52:20.667084 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:52:20.671395 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:52:20.674891 systemd[1]: Finished clean-ca-certificates.service. 
Sep 13 00:52:20.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.691000 audit[1495]: SYSTEM_BOOT pid=1495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.678093 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:52:20.695670 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:52:20.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:52:20.773818 systemd[1]: Finished systemd-journal-catalog-update.service. 
Sep 13 00:52:20.787000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:52:20.787000 audit[1510]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7187d590 a2=420 a3=0 items=0 ppid=1488 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:52:20.787000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:52:20.788636 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:52:20.791309 augenrules[1510]: No rules Sep 13 00:52:20.791387 systemd[1]: Finished audit-rules.service. Sep 13 00:52:20.793454 systemd[1]: Reached target time-set.target. Sep 13 00:52:20.822866 systemd-resolved[1493]: Positive Trust Anchors: Sep 13 00:52:20.822882 systemd-resolved[1493]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:52:20.822916 systemd-resolved[1493]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:52:20.889907 systemd-resolved[1493]: Using system hostname 'ci-3510.3.8-n-1677b4f607'. Sep 13 00:52:20.891553 systemd[1]: Started systemd-resolved.service. Sep 13 00:52:20.894257 systemd[1]: Reached target network.target. Sep 13 00:52:20.896475 systemd[1]: Reached target network-online.target. Sep 13 00:52:20.897615 systemd-timesyncd[1494]: Contacted time server 149.22.188.7:123 (0.flatcar.pool.ntp.org). 
Sep 13 00:52:20.897942 systemd-timesyncd[1494]: Initial clock synchronization to Sat 2025-09-13 00:52:20.895818 UTC. Sep 13 00:52:20.899078 systemd[1]: Reached target nss-lookup.target. Sep 13 00:52:21.909844 ldconfig[1334]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:52:21.920555 systemd[1]: Finished ldconfig.service. Sep 13 00:52:21.924835 systemd[1]: Starting systemd-update-done.service... Sep 13 00:52:21.934786 systemd[1]: Finished systemd-update-done.service. Sep 13 00:52:21.937052 systemd[1]: Reached target sysinit.target. Sep 13 00:52:21.939920 systemd[1]: Started motdgen.path. Sep 13 00:52:21.941787 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:52:21.944857 systemd[1]: Started logrotate.timer. Sep 13 00:52:21.946765 systemd[1]: Started mdadm.timer. Sep 13 00:52:21.948523 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:52:21.950759 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:52:21.950797 systemd[1]: Reached target paths.target. Sep 13 00:52:21.952597 systemd[1]: Reached target timers.target. Sep 13 00:52:21.955327 systemd[1]: Listening on dbus.socket. Sep 13 00:52:21.958463 systemd[1]: Starting docker.socket... Sep 13 00:52:21.967471 systemd[1]: Listening on sshd.socket. Sep 13 00:52:21.969311 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:21.969771 systemd[1]: Listening on docker.socket. Sep 13 00:52:21.971855 systemd[1]: Reached target sockets.target. Sep 13 00:52:21.973854 systemd[1]: Reached target basic.target. 
Sep 13 00:52:21.975856 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:52:21.975911 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:52:21.975939 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:52:21.976953 systemd[1]: Starting containerd.service... Sep 13 00:52:21.980324 systemd[1]: Starting dbus.service... Sep 13 00:52:21.984034 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:52:21.987812 systemd[1]: Starting extend-filesystems.service... Sep 13 00:52:21.990251 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:52:22.003708 systemd[1]: Starting kubelet.service... Sep 13 00:52:22.011738 systemd[1]: Starting motdgen.service... Sep 13 00:52:22.014878 systemd[1]: Started nvidia.service. Sep 13 00:52:22.022290 systemd[1]: Starting prepare-helm.service... Sep 13 00:52:22.025719 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:52:22.029509 systemd[1]: Starting sshd-keygen.service... Sep 13 00:52:22.035573 jq[1526]: false Sep 13 00:52:22.034788 systemd[1]: Starting systemd-logind.service... Sep 13 00:52:22.038757 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:52:22.038877 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:52:22.042407 systemd[1]: Starting update-engine.service... Sep 13 00:52:22.046444 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:52:22.052784 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:52:22.053111 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Sep 13 00:52:22.063356 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:52:22.063673 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:52:22.075306 jq[1542]: true Sep 13 00:52:22.093668 extend-filesystems[1527]: Found loop1 Sep 13 00:52:22.096208 extend-filesystems[1527]: Found sda Sep 13 00:52:22.096208 extend-filesystems[1527]: Found sda1 Sep 13 00:52:22.096208 extend-filesystems[1527]: Found sda2 Sep 13 00:52:22.096208 extend-filesystems[1527]: Found sda3 Sep 13 00:52:22.096208 extend-filesystems[1527]: Found usr Sep 13 00:52:22.096208 extend-filesystems[1527]: Found sda4 Sep 13 00:52:22.096208 extend-filesystems[1527]: Found sda6 Sep 13 00:52:22.096208 extend-filesystems[1527]: Found sda7 Sep 13 00:52:22.096208 extend-filesystems[1527]: Found sda9 Sep 13 00:52:22.096208 extend-filesystems[1527]: Checking size of /dev/sda9 Sep 13 00:52:22.149302 jq[1557]: true Sep 13 00:52:22.149421 tar[1547]: linux-amd64/helm Sep 13 00:52:22.166663 extend-filesystems[1527]: Old size kept for /dev/sda9 Sep 13 00:52:22.169289 extend-filesystems[1527]: Found sr0 Sep 13 00:52:22.171564 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:52:22.171878 systemd[1]: Finished extend-filesystems.service. Sep 13 00:52:22.190724 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:52:22.191028 systemd[1]: Finished motdgen.service. Sep 13 00:52:22.213425 bash[1584]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:52:22.214284 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:52:22.246055 dbus-daemon[1524]: [system] SELinux support is enabled Sep 13 00:52:22.246256 systemd[1]: Started dbus.service. Sep 13 00:52:22.250751 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:52:22.250778 systemd[1]: Reached target system-config.target. 
Sep 13 00:52:22.253362 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:52:22.253383 systemd[1]: Reached target user-config.target. Sep 13 00:52:22.257016 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:52:22.260260 systemd-logind[1540]: New seat seat0. Sep 13 00:52:22.263715 systemd[1]: Started systemd-logind.service. Sep 13 00:52:22.301714 env[1564]: time="2025-09-13T00:52:22.301648274Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:52:22.326972 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 00:52:22.369343 update_engine[1541]: I0913 00:52:22.325738 1541 main.cc:92] Flatcar Update Engine starting Sep 13 00:52:22.381117 systemd[1]: Started update-engine.service. Sep 13 00:52:22.386377 systemd[1]: Started locksmithd.service. Sep 13 00:52:22.388985 update_engine[1541]: I0913 00:52:22.388867 1541 update_check_scheduler.cc:74] Next update check in 5m50s Sep 13 00:52:22.420939 env[1564]: time="2025-09-13T00:52:22.420848393Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:52:22.421223 env[1564]: time="2025-09-13T00:52:22.421204057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:22.424767 env[1564]: time="2025-09-13T00:52:22.424722107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:52:22.424888 env[1564]: time="2025-09-13T00:52:22.424872492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:52:22.425308 env[1564]: time="2025-09-13T00:52:22.425285351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:52:22.427263 env[1564]: time="2025-09-13T00:52:22.427238156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:22.427388 env[1564]: time="2025-09-13T00:52:22.427368143Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:52:22.427466 env[1564]: time="2025-09-13T00:52:22.427450735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:22.427649 env[1564]: time="2025-09-13T00:52:22.427633017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:22.428017 env[1564]: time="2025-09-13T00:52:22.427997980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:52:22.428383 env[1564]: time="2025-09-13T00:52:22.428358744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:52:22.428458 env[1564]: time="2025-09-13T00:52:22.428445136Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 13 00:52:22.428580 env[1564]: time="2025-09-13T00:52:22.428565424Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:52:22.428647 env[1564]: time="2025-09-13T00:52:22.428636417Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441032781Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441072577Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441092575Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441139170Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441159868Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441178267Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441250559Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441268957Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441287056Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441304154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441320252Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441337351Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441449040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:52:22.443361 env[1564]: time="2025-09-13T00:52:22.441532331Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.441969188Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442000485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442017883Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442070178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442087276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442103374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442119173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442135371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442152669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442168268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442194465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442218763Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442351350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442369848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.443803 env[1564]: time="2025-09-13T00:52:22.442386946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.444295 env[1564]: time="2025-09-13T00:52:22.442410044Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:52:22.444295 env[1564]: time="2025-09-13T00:52:22.442431342Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:52:22.444295 env[1564]: time="2025-09-13T00:52:22.442446440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:52:22.444295 env[1564]: time="2025-09-13T00:52:22.442468238Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:52:22.444295 env[1564]: time="2025-09-13T00:52:22.442509134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:52:22.444457 env[1564]: time="2025-09-13T00:52:22.442766808Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:52:22.444457 env[1564]: time="2025-09-13T00:52:22.442848500Z" level=info msg="Connect containerd service" Sep 13 00:52:22.444457 env[1564]: time="2025-09-13T00:52:22.442894195Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:52:22.453638 env[1564]: time="2025-09-13T00:52:22.444754610Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:52:22.453638 env[1564]: time="2025-09-13T00:52:22.445024883Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:52:22.453638 env[1564]: time="2025-09-13T00:52:22.445069479Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:52:22.453638 env[1564]: time="2025-09-13T00:52:22.448306456Z" level=info msg="containerd successfully booted in 0.158852s" Sep 13 00:52:22.445257 systemd[1]: Started containerd.service. 
Sep 13 00:52:22.462218 env[1564]: time="2025-09-13T00:52:22.460224968Z" level=info msg="Start subscribing containerd event" Sep 13 00:52:22.462218 env[1564]: time="2025-09-13T00:52:22.460292161Z" level=info msg="Start recovering state" Sep 13 00:52:22.462218 env[1564]: time="2025-09-13T00:52:22.460360455Z" level=info msg="Start event monitor" Sep 13 00:52:22.462218 env[1564]: time="2025-09-13T00:52:22.460372753Z" level=info msg="Start snapshots syncer" Sep 13 00:52:22.462218 env[1564]: time="2025-09-13T00:52:22.460384852Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:52:22.462218 env[1564]: time="2025-09-13T00:52:22.460394451Z" level=info msg="Start streaming server" Sep 13 00:52:23.028812 tar[1547]: linux-amd64/LICENSE Sep 13 00:52:23.029170 tar[1547]: linux-amd64/README.md Sep 13 00:52:23.036473 systemd[1]: Finished prepare-helm.service. Sep 13 00:52:23.399257 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:52:23.754467 systemd[1]: Started kubelet.service. Sep 13 00:52:24.356370 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:52:24.380442 systemd[1]: Finished sshd-keygen.service. Sep 13 00:52:24.384802 systemd[1]: Starting issuegen.service... Sep 13 00:52:24.388561 systemd[1]: Started waagent.service. Sep 13 00:52:24.400115 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:52:24.400407 systemd[1]: Finished issuegen.service. Sep 13 00:52:24.404389 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:52:24.426952 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:52:24.431350 systemd[1]: Started getty@tty1.service. Sep 13 00:52:24.435163 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:52:24.442015 systemd[1]: Reached target getty.target. Sep 13 00:52:24.447364 systemd[1]: Reached target multi-user.target. Sep 13 00:52:24.453070 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Sep 13 00:52:24.465086 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:52:24.465394 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:52:24.472340 systemd[1]: Startup finished in 8.571s (kernel) + 12.308s (userspace) = 20.880s. Sep 13 00:52:24.533396 kubelet[1655]: E0913 00:52:24.533352 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:52:24.535020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:52:24.535234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:52:24.683715 login[1678]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:52:24.685117 login[1679]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:52:25.165688 systemd[1]: Created slice user-500.slice. Sep 13 00:52:25.167226 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:52:25.169865 systemd-logind[1540]: New session 1 of user core. Sep 13 00:52:25.173444 systemd-logind[1540]: New session 2 of user core. Sep 13 00:52:25.185077 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:52:25.186778 systemd[1]: Starting user@500.service... Sep 13 00:52:25.198922 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:25.346467 systemd[1686]: Queued start job for default target default.target. Sep 13 00:52:25.346780 systemd[1686]: Reached target paths.target. Sep 13 00:52:25.346806 systemd[1686]: Reached target sockets.target. Sep 13 00:52:25.346823 systemd[1686]: Reached target timers.target. Sep 13 00:52:25.346839 systemd[1686]: Reached target basic.target. 
Sep 13 00:52:25.346983 systemd[1]: Started user@500.service. Sep 13 00:52:25.348124 systemd[1]: Started session-1.scope. Sep 13 00:52:25.348440 systemd[1686]: Reached target default.target. Sep 13 00:52:25.348573 systemd[1686]: Startup finished in 142ms. Sep 13 00:52:25.348851 systemd[1]: Started session-2.scope. Sep 13 00:52:34.786172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:52:34.786446 systemd[1]: Stopped kubelet.service. Sep 13 00:52:34.788006 systemd[1]: Starting kubelet.service... Sep 13 00:52:35.589649 systemd[1]: Started kubelet.service. Sep 13 00:52:35.648643 kubelet[1718]: E0913 00:52:35.648595 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:52:35.651468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:52:35.651683 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:52:38.439050 waagent[1671]: 2025-09-13T00:52:38.438934Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Sep 13 00:52:38.506929 waagent[1671]: 2025-09-13T00:52:38.506829Z INFO Daemon Daemon OS: flatcar 3510.3.8 Sep 13 00:52:38.509623 waagent[1671]: 2025-09-13T00:52:38.509565Z INFO Daemon Daemon Python: 3.9.16 Sep 13 00:52:38.512329 waagent[1671]: 2025-09-13T00:52:38.512261Z INFO Daemon Daemon Run daemon Sep 13 00:52:38.514995 waagent[1671]: 2025-09-13T00:52:38.514934Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Sep 13 00:52:38.552349 waagent[1671]: 2025-09-13T00:52:38.552220Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Sep 13 00:52:38.559918 waagent[1671]: 2025-09-13T00:52:38.559802Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.560249Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.561242Z INFO Daemon Daemon Using waagent for provisioning Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.562859Z INFO Daemon Daemon Activate resource disk Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.563671Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.572429Z INFO Daemon Daemon Found device: None Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.573298Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.574225Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, 
duration=0 Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.575985Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.576788Z INFO Daemon Daemon Running default provisioning handler Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.586524Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.588935Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.590011Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 00:52:38.607914 waagent[1671]: 2025-09-13T00:52:38.590828Z INFO Daemon Daemon Copying ovf-env.xml Sep 13 00:52:38.870931 waagent[1671]: 2025-09-13T00:52:38.870708Z INFO Daemon Daemon Successfully mounted dvd Sep 13 00:52:39.024005 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 13 00:52:39.096765 waagent[1671]: 2025-09-13T00:52:39.096615Z INFO Daemon Daemon Detect protocol endpoint Sep 13 00:52:39.099996 waagent[1671]: 2025-09-13T00:52:39.099925Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:52:39.103324 waagent[1671]: 2025-09-13T00:52:39.103266Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 13 00:52:39.106953 waagent[1671]: 2025-09-13T00:52:39.106899Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 13 00:52:39.110164 waagent[1671]: 2025-09-13T00:52:39.110105Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 13 00:52:39.113118 waagent[1671]: 2025-09-13T00:52:39.113066Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 13 00:52:39.388448 waagent[1671]: 2025-09-13T00:52:39.388383Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 13 00:52:39.392598 waagent[1671]: 2025-09-13T00:52:39.392556Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 13 00:52:39.395532 waagent[1671]: 2025-09-13T00:52:39.395476Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 13 00:52:40.078557 waagent[1671]: 2025-09-13T00:52:40.078411Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 13 00:52:40.090373 waagent[1671]: 2025-09-13T00:52:40.090304Z INFO Daemon Daemon Forcing an update of the goal state.. Sep 13 00:52:40.093907 waagent[1671]: 2025-09-13T00:52:40.093848Z INFO Daemon Daemon Fetching goal state [incarnation 1] Sep 13 00:52:40.147971 waagent[1671]: 2025-09-13T00:52:40.147847Z INFO Daemon Daemon Found private key matching thumbprint 90BDC5472481D64DA4F21A4B1669FE696653F802 Sep 13 00:52:40.154378 waagent[1671]: 2025-09-13T00:52:40.148424Z INFO Daemon Daemon Fetch goal state completed Sep 13 00:52:40.168111 waagent[1671]: 2025-09-13T00:52:40.168055Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 9a2a1f5a-6406-4998-88b2-b16547b279c3 New eTag: 4831607540718056787] Sep 13 00:52:40.176426 waagent[1671]: 2025-09-13T00:52:40.168723Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Sep 13 00:52:40.177079 waagent[1671]: 2025-09-13T00:52:40.177013Z INFO Daemon Daemon Starting provisioning Sep 13 00:52:40.185422 waagent[1671]: 2025-09-13T00:52:40.177318Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 13 00:52:40.185422 waagent[1671]: 2025-09-13T00:52:40.178440Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-1677b4f607] Sep 13 00:52:40.185980 waagent[1671]: 2025-09-13T00:52:40.185869Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-1677b4f607] Sep 13 00:52:40.195045 waagent[1671]: 2025-09-13T00:52:40.186528Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 13 00:52:40.195045 waagent[1671]: 2025-09-13T00:52:40.188025Z INFO Daemon Daemon Primary interface is [eth0] Sep 13 00:52:40.201832 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Sep 13 00:52:40.202128 systemd[1]: Stopped systemd-networkd-wait-online.service. Sep 13 00:52:40.202222 systemd[1]: Stopping systemd-networkd-wait-online.service... Sep 13 00:52:40.202493 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:52:40.207246 systemd-networkd[1256]: eth0: DHCPv6 lease lost Sep 13 00:52:40.208651 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:52:40.208960 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:52:40.211837 systemd[1]: Starting systemd-networkd.service... Sep 13 00:52:40.247342 systemd-networkd[1746]: enP45326s1: Link UP Sep 13 00:52:40.247353 systemd-networkd[1746]: enP45326s1: Gained carrier Sep 13 00:52:40.248682 systemd-networkd[1746]: eth0: Link UP Sep 13 00:52:40.248691 systemd-networkd[1746]: eth0: Gained carrier Sep 13 00:52:40.249090 systemd-networkd[1746]: lo: Link UP Sep 13 00:52:40.249099 systemd-networkd[1746]: lo: Gained carrier Sep 13 00:52:40.249436 systemd-networkd[1746]: eth0: Gained IPv6LL Sep 13 00:52:40.249705 systemd-networkd[1746]: Enumeration completed Sep 13 00:52:40.249836 systemd[1]: Started systemd-networkd.service. Sep 13 00:52:40.253257 waagent[1671]: 2025-09-13T00:52:40.251612Z INFO Daemon Daemon Create user account if not exists Sep 13 00:52:40.252178 systemd[1]: Starting systemd-networkd-wait-online.service... 
Sep 13 00:52:40.255336 waagent[1671]: 2025-09-13T00:52:40.254664Z INFO Daemon Daemon User core already exists, skip useradd Sep 13 00:52:40.259074 waagent[1671]: 2025-09-13T00:52:40.257975Z INFO Daemon Daemon Configure sudoer Sep 13 00:52:40.262236 systemd-networkd[1746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:52:40.264652 waagent[1671]: 2025-09-13T00:52:40.264586Z INFO Daemon Daemon Configure sshd Sep 13 00:52:40.269336 waagent[1671]: 2025-09-13T00:52:40.264879Z INFO Daemon Daemon Deploy ssh public key. Sep 13 00:52:40.276291 systemd-networkd[1746]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 13 00:52:40.279215 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:52:41.341054 waagent[1671]: 2025-09-13T00:52:41.340967Z INFO Daemon Daemon Provisioning complete Sep 13 00:52:41.353919 waagent[1671]: 2025-09-13T00:52:41.353857Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 13 00:52:41.362013 waagent[1671]: 2025-09-13T00:52:41.354268Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 13 00:52:41.362013 waagent[1671]: 2025-09-13T00:52:41.356008Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Sep 13 00:52:41.622717 waagent[1753]: 2025-09-13T00:52:41.622557Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Sep 13 00:52:41.623500 waagent[1753]: 2025-09-13T00:52:41.623436Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:52:41.623654 waagent[1753]: 2025-09-13T00:52:41.623590Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:52:41.634521 waagent[1753]: 2025-09-13T00:52:41.634443Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Sep 13 00:52:41.634689 waagent[1753]: 2025-09-13T00:52:41.634636Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Sep 13 00:52:41.680802 waagent[1753]: 2025-09-13T00:52:41.680675Z INFO ExtHandler ExtHandler Found private key matching thumbprint 90BDC5472481D64DA4F21A4B1669FE696653F802 Sep 13 00:52:41.681111 waagent[1753]: 2025-09-13T00:52:41.681055Z INFO ExtHandler ExtHandler Fetch goal state completed Sep 13 00:52:41.693584 waagent[1753]: 2025-09-13T00:52:41.693523Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 015d374d-68f4-4885-add9-4d93781c71ac New eTag: 4831607540718056787] Sep 13 00:52:41.694086 waagent[1753]: 2025-09-13T00:52:41.694031Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Sep 13 00:52:41.922033 waagent[1753]: 2025-09-13T00:52:41.921832Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 00:52:41.962212 waagent[1753]: 2025-09-13T00:52:41.962120Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1753 Sep 13 00:52:41.965554 waagent[1753]: 2025-09-13T00:52:41.965483Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 00:52:41.966726 waagent[1753]: 2025-09-13T00:52:41.966668Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 13 00:52:41.996300 waagent[1753]: 2025-09-13T00:52:41.996240Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 00:52:41.996683 waagent[1753]: 2025-09-13T00:52:41.996628Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 00:52:42.004141 waagent[1753]: 2025-09-13T00:52:42.004082Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Sep 13 00:52:42.004629 waagent[1753]: 2025-09-13T00:52:42.004572Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 13 00:52:42.005683 waagent[1753]: 2025-09-13T00:52:42.005617Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Sep 13 00:52:42.006927 waagent[1753]: 2025-09-13T00:52:42.006870Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 00:52:42.007804 waagent[1753]: 2025-09-13T00:52:42.007752Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:52:42.008111 waagent[1753]: 2025-09-13T00:52:42.008063Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:52:42.008312 waagent[1753]: 2025-09-13T00:52:42.008246Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:52:42.008424 waagent[1753]: 2025-09-13T00:52:42.008343Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 13 00:52:42.009294 waagent[1753]: 2025-09-13T00:52:42.009239Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 13 00:52:42.009793 waagent[1753]: 2025-09-13T00:52:42.009741Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:52:42.010055 waagent[1753]: 2025-09-13T00:52:42.010003Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 00:52:42.010055 waagent[1753]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 00:52:42.010055 waagent[1753]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 00:52:42.010055 waagent[1753]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 00:52:42.010055 waagent[1753]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:52:42.010055 waagent[1753]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:52:42.010055 waagent[1753]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:52:42.010632 waagent[1753]: 2025-09-13T00:52:42.010584Z INFO EnvHandler ExtHandler Configure routes Sep 13 00:52:42.010718 waagent[1753]: 2025-09-13T00:52:42.010138Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 00:52:42.010915 waagent[1753]: 2025-09-13T00:52:42.010869Z INFO EnvHandler ExtHandler Gateway:None Sep 13 00:52:42.011392 waagent[1753]: 2025-09-13T00:52:42.011343Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 00:52:42.013638 waagent[1753]: 2025-09-13T00:52:42.013428Z INFO EnvHandler ExtHandler Routes:None Sep 13 00:52:42.015692 waagent[1753]: 2025-09-13T00:52:42.015629Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 00:52:42.016005 waagent[1753]: 2025-09-13T00:52:42.015954Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 00:52:42.016892 waagent[1753]: 2025-09-13T00:52:42.016833Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 13 00:52:42.027481 waagent[1753]: 2025-09-13T00:52:42.027427Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Sep 13 00:52:42.028114 waagent[1753]: 2025-09-13T00:52:42.028077Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 00:52:42.028955 waagent[1753]: 2025-09-13T00:52:42.028912Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Sep 13 00:52:42.057826 waagent[1753]: 2025-09-13T00:52:42.057757Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Sep 13 00:52:42.115318 waagent[1753]: 2025-09-13T00:52:42.115240Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1746' Sep 13 00:52:42.317297 waagent[1753]: 2025-09-13T00:52:42.317106Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Sep 13 00:52:42.320095 waagent[1753]: 2025-09-13T00:52:42.319987Z INFO EnvHandler ExtHandler Firewall rules: Sep 13 00:52:42.320095 waagent[1753]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:42.320095 waagent[1753]: pkts bytes target prot opt in out source destination Sep 13 00:52:42.320095 waagent[1753]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:42.320095 waagent[1753]: pkts bytes target prot opt in out source destination Sep 13 00:52:42.320095 waagent[1753]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:42.320095 waagent[1753]: pkts bytes target prot opt in out source destination Sep 13 00:52:42.320095 waagent[1753]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:52:42.320095 waagent[1753]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 00:52:42.321418 waagent[1753]: 2025-09-13T00:52:42.321367Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 13 00:52:42.358505 waagent[1753]: 
2025-09-13T00:52:42.358440Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 00:52:42.358505 waagent[1753]: Executing ['ip', '-a', '-o', 'link']: Sep 13 00:52:42.358505 waagent[1753]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 00:52:42.358505 waagent[1753]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:fb:c9:b9 brd ff:ff:ff:ff:ff:ff Sep 13 00:52:42.358505 waagent[1753]: 3: enP45326s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:fb:c9:b9 brd ff:ff:ff:ff:ff:ff\ altname enP45326p0s2 Sep 13 00:52:42.358505 waagent[1753]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 00:52:42.358505 waagent[1753]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 00:52:42.358505 waagent[1753]: 2: eth0 inet 10.200.4.17/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 00:52:42.358505 waagent[1753]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 00:52:42.358505 waagent[1753]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 00:52:42.358505 waagent[1753]: 2: eth0 inet6 fe80::6245:bdff:fefb:c9b9/64 scope link \ valid_lft forever preferred_lft forever Sep 13 00:52:42.361591 waagent[1753]: 2025-09-13T00:52:42.361535Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Sep 13 00:52:43.360595 waagent[1671]: 2025-09-13T00:52:43.360460Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Sep 13 00:52:43.365502 waagent[1671]: 2025-09-13T00:52:43.365443Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Sep 13 00:52:44.457817 waagent[1790]: 
2025-09-13T00:52:44.457720Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Sep 13 00:52:44.458574 waagent[1790]: 2025-09-13T00:52:44.458507Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Sep 13 00:52:44.458729 waagent[1790]: 2025-09-13T00:52:44.458680Z INFO ExtHandler ExtHandler Python: 3.9.16 Sep 13 00:52:44.458878 waagent[1790]: 2025-09-13T00:52:44.458832Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 13 00:52:44.474094 waagent[1790]: 2025-09-13T00:52:44.473981Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 00:52:44.474538 waagent[1790]: 2025-09-13T00:52:44.474480Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:52:44.474712 waagent[1790]: 2025-09-13T00:52:44.474666Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:52:44.474930 waagent[1790]: 2025-09-13T00:52:44.474884Z INFO ExtHandler ExtHandler Initializing the goal state... Sep 13 00:52:44.487407 waagent[1790]: 2025-09-13T00:52:44.487322Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 00:52:44.495299 waagent[1790]: 2025-09-13T00:52:44.495239Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Sep 13 00:52:44.496279 waagent[1790]: 2025-09-13T00:52:44.496219Z INFO ExtHandler Sep 13 00:52:44.496449 waagent[1790]: 2025-09-13T00:52:44.496400Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7c1476cb-7dfd-4698-b288-cdcc6a5820a9 eTag: 4831607540718056787 source: Fabric] Sep 13 00:52:44.497126 waagent[1790]: 2025-09-13T00:52:44.497072Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 13 00:52:44.498199 waagent[1790]: 2025-09-13T00:52:44.498133Z INFO ExtHandler Sep 13 00:52:44.498348 waagent[1790]: 2025-09-13T00:52:44.498301Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 13 00:52:44.504392 waagent[1790]: 2025-09-13T00:52:44.504340Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 13 00:52:44.504887 waagent[1790]: 2025-09-13T00:52:44.504840Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 00:52:44.527129 waagent[1790]: 2025-09-13T00:52:44.527063Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Sep 13 00:52:44.579143 waagent[1790]: 2025-09-13T00:52:44.579015Z INFO ExtHandler Downloaded certificate {'thumbprint': '90BDC5472481D64DA4F21A4B1669FE696653F802', 'hasPrivateKey': True} Sep 13 00:52:44.580414 waagent[1790]: 2025-09-13T00:52:44.580331Z INFO ExtHandler Fetch goal state from WireServer completed Sep 13 00:52:44.581204 waagent[1790]: 2025-09-13T00:52:44.581144Z INFO ExtHandler ExtHandler Goal state initialization completed. 
Sep 13 00:52:44.596930 waagent[1790]: 2025-09-13T00:52:44.596820Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 13 00:52:44.604824 waagent[1790]: 2025-09-13T00:52:44.604722Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 00:52:44.608439 waagent[1790]: 2025-09-13T00:52:44.608340Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 13 00:52:44.608669 waagent[1790]: 2025-09-13T00:52:44.608617Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 13 00:52:44.623584 waagent[1790]: 2025-09-13T00:52:44.623470Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: Sep 13 00:52:44.623584 waagent[1790]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:44.623584 waagent[1790]: pkts bytes target prot opt in out source destination Sep 13 00:52:44.623584 waagent[1790]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:44.623584 waagent[1790]: pkts bytes target prot opt in out source destination Sep 13 00:52:44.623584 waagent[1790]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:44.623584 waagent[1790]: pkts bytes target prot opt in out source destination Sep 13 00:52:44.623584 waagent[1790]: 85 9425 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:52:44.623584 waagent[1790]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 00:52:44.624728 waagent[1790]: 2025-09-13T00:52:44.624658Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Sep 13 00:52:44.627296 waagent[1790]: 2025-09-13T00:52:44.627167Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 13 00:52:44.627560 waagent[1790]: 2025-09-13T00:52:44.627509Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 00:52:44.627913 waagent[1790]: 2025-09-13T00:52:44.627860Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 00:52:44.635545 waagent[1790]: 2025-09-13T00:52:44.635488Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Sep 13 00:52:44.636030 waagent[1790]: 2025-09-13T00:52:44.635974Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 13 00:52:44.643537 waagent[1790]: 2025-09-13T00:52:44.643463Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1790 Sep 13 00:52:44.646516 waagent[1790]: 2025-09-13T00:52:44.646452Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 00:52:44.647268 waagent[1790]: 2025-09-13T00:52:44.647209Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 13 00:52:44.648203 waagent[1790]: 2025-09-13T00:52:44.648140Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 13 00:52:44.650646 waagent[1790]: 2025-09-13T00:52:44.650586Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 13 00:52:44.650970 waagent[1790]: 2025-09-13T00:52:44.650919Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 13 00:52:44.652253 waagent[1790]: 2025-09-13T00:52:44.652178Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 00:52:44.652776 waagent[1790]: 2025-09-13T00:52:44.652721Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:52:44.652948 waagent[1790]: 2025-09-13T00:52:44.652903Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:52:44.653506 waagent[1790]: 2025-09-13T00:52:44.653457Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 13 00:52:44.653935 waagent[1790]: 2025-09-13T00:52:44.653886Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 13 00:52:44.654621 waagent[1790]: 2025-09-13T00:52:44.654570Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 00:52:44.654830 waagent[1790]: 2025-09-13T00:52:44.654781Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 00:52:44.655056 waagent[1790]: 2025-09-13T00:52:44.654988Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 00:52:44.655056 waagent[1790]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 00:52:44.655056 waagent[1790]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 00:52:44.655056 waagent[1790]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 00:52:44.655056 waagent[1790]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:52:44.655056 waagent[1790]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:52:44.655056 waagent[1790]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:52:44.657688 waagent[1790]: 2025-09-13T00:52:44.657604Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:52:44.658247 waagent[1790]: 2025-09-13T00:52:44.658169Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:52:44.659088 waagent[1790]: 2025-09-13T00:52:44.659032Z INFO EnvHandler ExtHandler Configure routes Sep 13 00:52:44.659279 waagent[1790]: 2025-09-13T00:52:44.659232Z INFO EnvHandler ExtHandler Gateway:None Sep 13 00:52:44.659427 waagent[1790]: 2025-09-13T00:52:44.659385Z INFO EnvHandler ExtHandler Routes:None Sep 13 00:52:44.665114 waagent[1790]: 2025-09-13T00:52:44.664918Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 00:52:44.665429 waagent[1790]: 2025-09-13T00:52:44.665334Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Sep 13 00:52:44.667928 waagent[1790]: 2025-09-13T00:52:44.667860Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 00:52:44.684987 waagent[1790]: 2025-09-13T00:52:44.684916Z INFO ExtHandler ExtHandler Downloading agent manifest Sep 13 00:52:44.690255 waagent[1790]: 2025-09-13T00:52:44.690167Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 00:52:44.690525 waagent[1790]: 2025-09-13T00:52:44.690469Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 00:52:44.690525 waagent[1790]: Executing ['ip', '-a', '-o', 'link']: Sep 13 00:52:44.690525 waagent[1790]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 00:52:44.690525 waagent[1790]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:fb:c9:b9 brd ff:ff:ff:ff:ff:ff Sep 13 00:52:44.690525 waagent[1790]: 3: enP45326s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:fb:c9:b9 brd ff:ff:ff:ff:ff:ff\ altname enP45326p0s2 Sep 13 00:52:44.690525 waagent[1790]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 00:52:44.690525 waagent[1790]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 00:52:44.690525 waagent[1790]: 2: eth0 inet 10.200.4.17/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 00:52:44.690525 waagent[1790]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 00:52:44.690525 waagent[1790]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 00:52:44.690525 waagent[1790]: 2: eth0 inet6 fe80::6245:bdff:fefb:c9b9/64 scope link \ valid_lft forever preferred_lft forever Sep 13 00:52:44.713656 waagent[1790]: 2025-09-13T00:52:44.713532Z
INFO ExtHandler ExtHandler Sep 13 00:52:44.715441 waagent[1790]: 2025-09-13T00:52:44.715383Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 58223c95-0c7b-4f13-b13b-c7ab9cbe979a correlation 714a35f6-1925-4405-a58d-07f5315da12a created: 2025-09-13T00:51:45.553635Z] Sep 13 00:52:44.718055 waagent[1790]: 2025-09-13T00:52:44.717998Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 13 00:52:44.719888 waagent[1790]: 2025-09-13T00:52:44.719834Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms] Sep 13 00:52:44.747551 waagent[1790]: 2025-09-13T00:52:44.747477Z INFO ExtHandler ExtHandler Looking for existing remote access users. Sep 13 00:52:44.750264 waagent[1790]: 2025-09-13T00:52:44.750204Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: Sep 13 00:52:44.750264 waagent[1790]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:44.750264 waagent[1790]: pkts bytes target prot opt in out source destination Sep 13 00:52:44.750264 waagent[1790]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:44.750264 waagent[1790]: pkts bytes target prot opt in out source destination Sep 13 00:52:44.750264 waagent[1790]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:44.750264 waagent[1790]: pkts bytes target prot opt in out source destination Sep 13 00:52:44.750264 waagent[1790]: 120 16616 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:52:44.750264 waagent[1790]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 00:52:44.752307 waagent[1790]: 2025-09-13T00:52:44.752251Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1221C7EF-7AD6-45B6-B179-B1971D7B5403;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Sep 13 00:52:44.795720 waagent[1790]: 2025-09-13T00:52:44.795614Z INFO EnvHandler ExtHandler The firewall was setup successfully: Sep 13 00:52:44.795720 waagent[1790]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:44.795720 waagent[1790]: pkts bytes target prot opt in out source destination Sep 13 00:52:44.795720 waagent[1790]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:44.795720 waagent[1790]: pkts bytes target prot opt in out source destination Sep 13 00:52:44.795720 waagent[1790]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:52:44.795720 waagent[1790]: pkts bytes target prot opt in out source destination Sep 13 00:52:44.795720 waagent[1790]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 00:52:44.795720 waagent[1790]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:52:44.795720 waagent[1790]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 
ctstate INVALID,NEW Sep 13 00:52:45.902686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:52:45.902935 systemd[1]: Stopped kubelet.service. Sep 13 00:52:45.904545 systemd[1]: Starting kubelet.service... Sep 13 00:52:46.373590 systemd[1]: Started kubelet.service. Sep 13 00:52:46.802234 kubelet[1844]: E0913 00:52:46.802108 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:52:46.803790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:52:46.803991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:52:57.054938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:52:57.055213 systemd[1]: Stopped kubelet.service. Sep 13 00:52:57.056825 systemd[1]: Starting kubelet.service... Sep 13 00:52:57.150820 systemd[1]: Started kubelet.service. Sep 13 00:52:57.904608 kubelet[1859]: E0913 00:52:57.904561 1859 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:52:57.906054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:52:57.906269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:53:02.732065 systemd[1]: Created slice system-sshd.slice. Sep 13 00:53:02.733573 systemd[1]: Started sshd@0-10.200.4.17:22-10.200.16.10:47282.service. 
Sep 13 00:53:03.409239 sshd[1866]: Accepted publickey for core from 10.200.16.10 port 47282 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:53:03.410516 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:03.414610 systemd-logind[1540]: New session 3 of user core. Sep 13 00:53:03.415102 systemd[1]: Started session-3.scope. Sep 13 00:53:03.928068 systemd[1]: Started sshd@1-10.200.4.17:22-10.200.16.10:47298.service. Sep 13 00:53:04.525677 sshd[1871]: Accepted publickey for core from 10.200.16.10 port 47298 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:53:04.526995 sshd[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:04.531246 systemd-logind[1540]: New session 4 of user core. Sep 13 00:53:04.531575 systemd[1]: Started session-4.scope. Sep 13 00:53:04.953781 sshd[1871]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:04.956327 systemd[1]: sshd@1-10.200.4.17:22-10.200.16.10:47298.service: Deactivated successfully. Sep 13 00:53:04.957400 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:53:04.957492 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:53:04.958884 systemd-logind[1540]: Removed session 4. Sep 13 00:53:05.050477 systemd[1]: Started sshd@2-10.200.4.17:22-10.200.16.10:47308.service. Sep 13 00:53:05.649639 sshd[1878]: Accepted publickey for core from 10.200.16.10 port 47308 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:53:05.650903 sshd[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:05.655399 systemd[1]: Started session-5.scope. Sep 13 00:53:05.655802 systemd-logind[1540]: New session 5 of user core. 
Sep 13 00:53:06.076671 sshd[1878]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:06.079354 systemd[1]: sshd@2-10.200.4.17:22-10.200.16.10:47308.service: Deactivated successfully. Sep 13 00:53:06.080852 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:53:06.081381 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:53:06.082535 systemd-logind[1540]: Removed session 5. Sep 13 00:53:06.172969 systemd[1]: Started sshd@3-10.200.4.17:22-10.200.16.10:47324.service. Sep 13 00:53:06.612616 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 13 00:53:06.764053 sshd[1885]: Accepted publickey for core from 10.200.16.10 port 47324 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:53:06.765328 sshd[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:06.769790 systemd[1]: Started session-6.scope. Sep 13 00:53:06.770051 systemd-logind[1540]: New session 6 of user core. Sep 13 00:53:07.192954 sshd[1885]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:07.195537 systemd[1]: sshd@3-10.200.4.17:22-10.200.16.10:47324.service: Deactivated successfully. Sep 13 00:53:07.197013 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:53:07.197044 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:53:07.198085 systemd-logind[1540]: Removed session 6. Sep 13 00:53:07.310876 systemd[1]: Started sshd@4-10.200.4.17:22-10.200.16.10:47340.service. Sep 13 00:53:07.592752 update_engine[1541]: I0913 00:53:07.592692 1541 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:53:07.903387 sshd[1892]: Accepted publickey for core from 10.200.16.10 port 47340 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:53:07.904698 sshd[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:07.908841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 00:53:07.909665 systemd[1]: Started session-7.scope. Sep 13 00:53:07.909829 systemd[1]: Stopped kubelet.service. Sep 13 00:53:07.911582 systemd[1]: Starting kubelet.service... Sep 13 00:53:07.915656 systemd-logind[1540]: New session 7 of user core. Sep 13 00:53:08.010665 systemd[1]: Started kubelet.service. Sep 13 00:53:08.663197 kubelet[1970]: E0913 00:53:08.663136 1970 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:53:08.664500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:53:08.664740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:53:08.734765 sudo[1976]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:53:08.735070 sudo[1976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:08.747386 dbus-daemon[1524]: \xd0]4\xfd\xedU: received setenforce notice (enforcing=-1678173392) Sep 13 00:53:08.749111 sudo[1976]: pam_unix(sudo:session): session closed for user root Sep 13 00:53:08.848694 sshd[1892]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:08.851674 systemd[1]: sshd@4-10.200.4.17:22-10.200.16.10:47340.service: Deactivated successfully. Sep 13 00:53:08.853424 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 13 00:53:08.854330 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:53:08.855581 systemd-logind[1540]: Removed session 7. Sep 13 00:53:08.945558 systemd[1]: Started sshd@5-10.200.4.17:22-10.200.16.10:47342.service. Sep 13 00:53:09.536679 sshd[1981]: Accepted publickey for core from 10.200.16.10 port 47342 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:53:09.538992 sshd[1981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:09.543523 systemd[1]: Started session-8.scope. Sep 13 00:53:09.543755 systemd-logind[1540]: New session 8 of user core. Sep 13 00:53:09.863288 sudo[1986]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:53:09.863582 sudo[1986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:09.866243 sudo[1986]: pam_unix(sudo:session): session closed for user root Sep 13 00:53:09.870516 sudo[1985]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:53:09.870798 sudo[1985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:09.879169 systemd[1]: Stopping audit-rules.service... Sep 13 00:53:09.879000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:53:09.884211 kernel: kauditd_printk_skb: 83 callbacks suppressed Sep 13 00:53:09.884294 kernel: audit: type=1305 audit(1757724789.879:163): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:53:09.884516 auditctl[1989]: No rules Sep 13 00:53:09.885092 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:53:09.885361 systemd[1]: Stopped audit-rules.service. Sep 13 00:53:09.887505 systemd[1]: Starting audit-rules.service... 
Sep 13 00:53:09.892861 kernel: audit: type=1300 audit(1757724789.879:163): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff7862d660 a2=420 a3=0 items=0 ppid=1 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:09.879000 audit[1989]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff7862d660 a2=420 a3=0 items=0 ppid=1 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:09.879000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:53:09.913886 kernel: audit: type=1327 audit(1757724789.879:163): proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:53:09.913970 kernel: audit: type=1131 audit(1757724789.883:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.917600 augenrules[2007]: No rules Sep 13 00:53:09.918598 systemd[1]: Finished audit-rules.service. Sep 13 00:53:09.925560 sudo[1985]: pam_unix(sudo:session): session closed for user root Sep 13 00:53:09.927207 kernel: audit: type=1130 audit(1757724789.917:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:09.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.937254 kernel: audit: type=1106 audit(1757724789.924:166): pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.924000 audit[1985]: USER_END pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.924000 audit[1985]: CRED_DISP pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:09.961118 kernel: audit: type=1104 audit(1757724789.924:167): pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.020573 sshd[1981]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:10.020000 audit[1981]: USER_END pid=1981 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:10.023170 systemd[1]: sshd@5-10.200.4.17:22-10.200.16.10:47342.service: Deactivated successfully. Sep 13 00:53:10.024020 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 13 00:53:10.030276 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:53:10.031266 systemd-logind[1540]: Removed session 8. Sep 13 00:53:10.020000 audit[1981]: CRED_DISP pid=1981 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:10.051601 kernel: audit: type=1106 audit(1757724790.020:168): pid=1981 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:10.051668 kernel: audit: type=1104 audit(1757724790.020:169): pid=1981 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:10.051690 kernel: audit: type=1131 audit(1757724790.020:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.17:22-10.200.16.10:47342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.17:22-10.200.16.10:47342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:10.116874 systemd[1]: Started sshd@6-10.200.4.17:22-10.200.16.10:41544.service. Sep 13 00:53:10.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.17:22-10.200.16.10:41544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:10.705000 audit[2014]: USER_ACCT pid=2014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:10.707343 sshd[2014]: Accepted publickey for core from 10.200.16.10 port 41544 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:53:10.707000 audit[2014]: CRED_ACQ pid=2014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:10.707000 audit[2014]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc87147f60 a2=3 a3=0 items=0 ppid=1 pid=2014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:10.707000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:53:10.708644 sshd[2014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:10.713027 systemd[1]: Started session-9.scope. Sep 13 00:53:10.713412 systemd-logind[1540]: New session 9 of user core. 
Sep 13 00:53:10.717000 audit[2014]: USER_START pid=2014 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:10.718000 audit[2017]: CRED_ACQ pid=2017 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:11.030000 audit[2018]: USER_ACCT pid=2018 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:11.030000 audit[2018]: CRED_REFR pid=2018 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:11.031422 sudo[2018]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:53:11.032123 sudo[2018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:11.032000 audit[2018]: USER_START pid=2018 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:11.057111 systemd[1]: Starting docker.service... 
Sep 13 00:53:11.097650 env[2028]: time="2025-09-13T00:53:11.097612553Z" level=info msg="Starting up" Sep 13 00:53:11.099052 env[2028]: time="2025-09-13T00:53:11.099020847Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:53:11.099052 env[2028]: time="2025-09-13T00:53:11.099040347Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:53:11.099176 env[2028]: time="2025-09-13T00:53:11.099063547Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:53:11.099176 env[2028]: time="2025-09-13T00:53:11.099076947Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:53:11.100681 env[2028]: time="2025-09-13T00:53:11.100666940Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:53:11.100752 env[2028]: time="2025-09-13T00:53:11.100743440Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:53:11.100796 env[2028]: time="2025-09-13T00:53:11.100787240Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:53:11.100832 env[2028]: time="2025-09-13T00:53:11.100825139Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:53:11.109776 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1272704999-merged.mount: Deactivated successfully. Sep 13 00:53:11.225864 env[2028]: time="2025-09-13T00:53:11.225823812Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 13 00:53:11.225864 env[2028]: time="2025-09-13T00:53:11.225850412Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 13 00:53:11.226096 env[2028]: time="2025-09-13T00:53:11.226067311Z" level=info msg="Loading containers: start." 
Sep 13 00:53:11.256000 audit[2055]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.256000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc7f2ce990 a2=0 a3=7ffc7f2ce97c items=0 ppid=2028 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.256000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Sep 13 00:53:11.258000 audit[2057]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.258000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc69d9b810 a2=0 a3=7ffc69d9b7fc items=0 ppid=2028 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.258000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Sep 13 00:53:11.260000 audit[2059]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.260000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffc83d1e70 a2=0 a3=7fffc83d1e5c items=0 ppid=2028 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.260000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 13 00:53:11.262000 audit[2061]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.262000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd1313e540 a2=0 a3=7ffd1313e52c items=0 ppid=2028 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.262000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 13 00:53:11.264000 audit[2063]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.264000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeaa8b6670 a2=0 a3=7ffeaa8b665c items=0 ppid=2028 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.264000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Sep 13 00:53:11.266000 audit[2065]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.266000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc70bc4c80 a2=0 a3=7ffc70bc4c6c items=0 ppid=2028 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.266000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Sep 13 00:53:11.284000 audit[2067]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.284000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca7de2970 a2=0 a3=7ffca7de295c items=0 ppid=2028 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.284000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Sep 13 00:53:11.287000 audit[2069]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.287000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd59206a10 a2=0 a3=7ffd592069fc items=0 ppid=2028 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.287000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Sep 13 00:53:11.289000 audit[2071]: NETFILTER_CFG table=filter:17 family=2 entries=2 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.289000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc92c64a00 a2=0 a3=7ffc92c649ec items=0 ppid=2028 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.289000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:53:11.316000 audit[2075]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_unregister_rule pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.316000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffec26e1150 a2=0 a3=7ffec26e113c items=0 ppid=2028 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.316000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:53:11.320000 audit[2076]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2076 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.320000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff7bbb6fa0 a2=0 a3=7fff7bbb6f8c items=0 ppid=2028 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.320000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:53:11.346213 kernel: Initializing XFRM netlink socket
Sep 13 00:53:11.359107 env[2028]: time="2025-09-13T00:53:11.359067850Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:53:11.391000 audit[2084]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.391000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffcf6c59510 a2=0 a3=7ffcf6c594fc items=0 ppid=2028 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.391000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Sep 13 00:53:11.405000 audit[2087]: NETFILTER_CFG table=nat:21 family=2 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.405000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe2915dc50 a2=0 a3=7ffe2915dc3c items=0 ppid=2028 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.405000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Sep 13 00:53:11.408000 audit[2090]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2090 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.408000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff80ef7a50 a2=0 a3=7fff80ef7a3c items=0 ppid=2028 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.408000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Sep 13 00:53:11.410000 audit[2092]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2092 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.410000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc76830eb0 a2=0 a3=7ffc76830e9c items=0 ppid=2028 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.410000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Sep 13 00:53:11.412000 audit[2094]: NETFILTER_CFG table=nat:24 family=2 entries=2 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.412000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd479aee40 a2=0 a3=7ffd479aee2c items=0 ppid=2028 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.412000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Sep 13 00:53:11.414000 audit[2096]: NETFILTER_CFG table=nat:25 family=2 entries=2 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.414000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff6f08eae0 a2=0 a3=7fff6f08eacc items=0 ppid=2028 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.414000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Sep 13 00:53:11.416000 audit[2098]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.416000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffcb3e52380 a2=0 a3=7ffcb3e5236c items=0 ppid=2028 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.416000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Sep 13 00:53:11.418000 audit[2100]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.418000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe022f4a50 a2=0 a3=7ffe022f4a3c items=0 ppid=2028 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.418000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Sep 13 00:53:11.420000 audit[2102]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2102 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.420000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc49502a40 a2=0 a3=7ffc49502a2c items=0 ppid=2028 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.420000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 13 00:53:11.422000 audit[2104]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.422000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcd690a9c0 a2=0 a3=7ffcd690a9ac items=0 ppid=2028 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.422000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 13 00:53:11.424000 audit[2106]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.424000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff1811f150 a2=0 a3=7fff1811f13c items=0 ppid=2028 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.424000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Sep 13 00:53:11.426029 systemd-networkd[1746]: docker0: Link UP
Sep 13 00:53:11.443000 audit[2110]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_unregister_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.443000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffb8d3bdd0 a2=0 a3=7fffb8d3bdbc items=0 ppid=2028 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.443000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:53:11.448000 audit[2111]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:11.448000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffee8f0e470 a2=0 a3=7ffee8f0e45c items=0 ppid=2028 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:11.448000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 00:53:11.450454 env[2028]: time="2025-09-13T00:53:11.450423265Z" level=info msg="Loading containers: done."
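[Annotation, not part of the captured log.] The `PROCTITLE` fields in the audit records above are hex-encoded process command lines in which arguments are separated by NUL bytes. A minimal Python sketch of the decoding, using the first PROCTITLE string from the entries above (the helper name is my own, not from any audit tooling):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex string: raw bytes with NUL-separated argv."""
    # bytes.fromhex() parses the hex pairs; NUL argument separators become spaces.
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("ascii")

# PROCTITLE from the audit[2055] NETFILTER_CFG entry above
cmd = decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
)
print(cmd)  # /usr/sbin/iptables --wait -t nat -N DOCKER
```

Applied to the other PROCTITLE strings, the same decoding recovers the full sequence of iptables invocations Docker issues while setting up its NAT and filter chains.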
Sep 13 00:53:11.480280 env[2028]: time="2025-09-13T00:53:11.480231239Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:53:11.480481 env[2028]: time="2025-09-13T00:53:11.480458738Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:53:11.480596 env[2028]: time="2025-09-13T00:53:11.480574738Z" level=info msg="Daemon has completed initialization"
Sep 13 00:53:11.522047 systemd[1]: Started docker.service.
Sep 13 00:53:11.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:11.532575 env[2028]: time="2025-09-13T00:53:11.532523519Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:53:13.026449 env[1564]: time="2025-09-13T00:53:13.026408690Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:53:14.073665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476769642.mount: Deactivated successfully.
Sep 13 00:53:15.847150 env[1564]: time="2025-09-13T00:53:15.847101444Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:15.853952 env[1564]: time="2025-09-13T00:53:15.853914321Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:15.857544 env[1564]: time="2025-09-13T00:53:15.857512210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:15.861869 env[1564]: time="2025-09-13T00:53:15.861839596Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:15.862488 env[1564]: time="2025-09-13T00:53:15.862458294Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 13 00:53:15.863162 env[1564]: time="2025-09-13T00:53:15.863135791Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:53:17.589381 env[1564]: time="2025-09-13T00:53:17.589301602Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:17.596490 env[1564]: time="2025-09-13T00:53:17.596421782Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:17.600960 env[1564]: time="2025-09-13T00:53:17.600929369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:17.605373 env[1564]: time="2025-09-13T00:53:17.605329357Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:17.606377 env[1564]: time="2025-09-13T00:53:17.606337954Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 13 00:53:17.607420 env[1564]: time="2025-09-13T00:53:17.607377951Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:53:18.801624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 13 00:53:18.801873 systemd[1]: Stopped kubelet.service.
Sep 13 00:53:18.803653 systemd[1]: Starting kubelet.service...
Sep 13 00:53:18.818219 kernel: kauditd_printk_skb: 84 callbacks suppressed
Sep 13 00:53:18.818308 kernel: audit: type=1130 audit(1757724798.800:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:18.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:18.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:18.836201 kernel: audit: type=1131 audit(1757724798.800:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:18.965329 kernel: audit: type=1130 audit(1757724798.948:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:18.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:18.949279 systemd[1]: Started kubelet.service.
Sep 13 00:53:19.589930 kubelet[2155]: E0913 00:53:19.589882 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:53:19.591364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:53:19.591559 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:53:19.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:53:19.606205 kernel: audit: type=1131 audit(1757724799.590:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:53:19.612540 env[1564]: time="2025-09-13T00:53:19.612502599Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:19.620532 env[1564]: time="2025-09-13T00:53:19.620475979Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:19.627438 env[1564]: time="2025-09-13T00:53:19.627384762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:19.637937 env[1564]: time="2025-09-13T00:53:19.637909935Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:19.638658 env[1564]: time="2025-09-13T00:53:19.638622034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 13 00:53:19.639262 env[1564]: time="2025-09-13T00:53:19.639238532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:53:21.002156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487751478.mount: Deactivated successfully.
Sep 13 00:53:21.675430 env[1564]: time="2025-09-13T00:53:21.675380668Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:21.681610 env[1564]: time="2025-09-13T00:53:21.681570055Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:21.685244 env[1564]: time="2025-09-13T00:53:21.685207047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:21.688894 env[1564]: time="2025-09-13T00:53:21.688862439Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:21.689240 env[1564]: time="2025-09-13T00:53:21.689212938Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 13 00:53:21.689723 env[1564]: time="2025-09-13T00:53:21.689700137Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:53:22.378914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1997733452.mount: Deactivated successfully.
Sep 13 00:53:23.860009 env[1564]: time="2025-09-13T00:53:23.859959203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:23.871730 env[1564]: time="2025-09-13T00:53:23.871687480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:23.875481 env[1564]: time="2025-09-13T00:53:23.875446573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:23.881911 env[1564]: time="2025-09-13T00:53:23.881872460Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:23.882725 env[1564]: time="2025-09-13T00:53:23.882695359Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:53:23.883585 env[1564]: time="2025-09-13T00:53:23.883556657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:53:24.445725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777703307.mount: Deactivated successfully.
Sep 13 00:53:24.463165 env[1564]: time="2025-09-13T00:53:24.463116686Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:24.469268 env[1564]: time="2025-09-13T00:53:24.469232375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:24.473011 env[1564]: time="2025-09-13T00:53:24.472978068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:24.476857 env[1564]: time="2025-09-13T00:53:24.476827361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:24.477359 env[1564]: time="2025-09-13T00:53:24.477330360Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:53:24.478124 env[1564]: time="2025-09-13T00:53:24.478099159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:53:25.159414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount444415613.mount: Deactivated successfully.
Sep 13 00:53:27.944711 env[1564]: time="2025-09-13T00:53:27.944649822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:27.951084 env[1564]: time="2025-09-13T00:53:27.951041425Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:27.954834 env[1564]: time="2025-09-13T00:53:27.954798768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:27.958540 env[1564]: time="2025-09-13T00:53:27.958509912Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:27.959230 env[1564]: time="2025-09-13T00:53:27.959171802Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 13 00:53:29.801608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Sep 13 00:53:29.801864 systemd[1]: Stopped kubelet.service.
Sep 13 00:53:29.803657 systemd[1]: Starting kubelet.service...
Sep 13 00:53:29.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:29.820208 kernel: audit: type=1130 audit(1757724809.800:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:29.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:29.841182 kernel: audit: type=1131 audit(1757724809.800:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:30.088908 systemd[1]: Started kubelet.service.
Sep 13 00:53:30.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:30.106216 kernel: audit: type=1130 audit(1757724810.088:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:30.135779 kubelet[2190]: E0913 00:53:30.135733 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:53:30.137223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:53:30.137429 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:53:30.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:53:30.153215 kernel: audit: type=1131 audit(1757724810.136:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:53:31.321315 systemd[1]: Stopped kubelet.service.
Sep 13 00:53:31.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.324373 systemd[1]: Starting kubelet.service...
Sep 13 00:53:31.340331 kernel: audit: type=1130 audit(1757724811.320:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.363215 kernel: audit: type=1131 audit(1757724811.320:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.369663 systemd[1]: Reloading.
Sep 13 00:53:31.447726 /usr/lib/systemd/system-generators/torcx-generator[2224]: time="2025-09-13T00:53:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:53:31.447764 /usr/lib/systemd/system-generators/torcx-generator[2224]: time="2025-09-13T00:53:31Z" level=info msg="torcx already run" Sep 13 00:53:31.556684 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:53:31.556707 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:53:31.576303 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:53:31.672639 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:53:31.672747 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:53:31.673072 systemd[1]: Stopped kubelet.service. Sep 13 00:53:31.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:53:31.681050 systemd[1]: Starting kubelet.service... Sep 13 00:53:31.687354 kernel: audit: type=1130 audit(1757724811.671:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:53:31.901141 systemd[1]: Started kubelet.service. 
Sep 13 00:53:31.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.921276 kernel: audit: type=1130 audit(1757724811.900:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:32.685685 kubelet[2301]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:53:32.685685 kubelet[2301]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:53:32.685685 kubelet[2301]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:53:32.685685 kubelet[2301]: I0913 00:53:32.685406 2301 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:53:32.930217 kubelet[2301]: I0913 00:53:32.929467 2301 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:53:32.930217 kubelet[2301]: I0913 00:53:32.929498 2301 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:53:32.930217 kubelet[2301]: I0913 00:53:32.929814 2301 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:53:32.953829 kubelet[2301]: E0913 00:53:32.953725 2301 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:32.954809 kubelet[2301]: I0913 00:53:32.954786 2301 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:53:32.960262 kubelet[2301]: E0913 00:53:32.960232 2301 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:53:32.960262 kubelet[2301]: I0913 00:53:32.960260 2301 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:53:32.964532 kubelet[2301]: I0913 00:53:32.964510 2301 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:53:32.964813 kubelet[2301]: I0913 00:53:32.964795 2301 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:53:32.964932 kubelet[2301]: I0913 00:53:32.964903 2301 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:53:32.965109 kubelet[2301]: I0913 00:53:32.964932 2301 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-1677b4f607","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:53:32.965244 kubelet[2301]: I0913 00:53:32.965122 2301 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:53:32.965244 kubelet[2301]: I0913 00:53:32.965136 2301 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:53:32.965323 kubelet[2301]: I0913 00:53:32.965263 2301 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:32.972741 kubelet[2301]: I0913 00:53:32.972704 2301 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:53:32.972839 kubelet[2301]: I0913 00:53:32.972745 2301 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:53:32.972839 kubelet[2301]: I0913 00:53:32.972786 2301 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:53:32.972839 kubelet[2301]: I0913 00:53:32.972806 2301 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:53:32.973705 kubelet[2301]: W0913 00:53:32.973653 2301 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-1677b4f607&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Sep 13 00:53:32.973788 kubelet[2301]: E0913 00:53:32.973714 2301 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-1677b4f607&limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:32.980764 kubelet[2301]: W0913 00:53:32.980721 2301 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: 
connection refused Sep 13 00:53:32.980902 kubelet[2301]: E0913 00:53:32.980884 2301 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:32.981804 kubelet[2301]: I0913 00:53:32.981781 2301 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:53:32.982379 kubelet[2301]: I0913 00:53:32.982363 2301 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:53:32.982516 kubelet[2301]: W0913 00:53:32.982506 2301 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:53:32.987176 kubelet[2301]: I0913 00:53:32.987155 2301 server.go:1274] "Started kubelet" Sep 13 00:53:32.991381 kubelet[2301]: I0913 00:53:32.991346 2301 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:53:32.994976 kubelet[2301]: I0913 00:53:32.994936 2301 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:53:32.995333 kubelet[2301]: I0913 00:53:32.995314 2301 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:53:32.996000 audit[2301]: AVC avc: denied { mac_admin } for pid=2301 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:33.000382 kubelet[2301]: I0913 00:53:33.000352 2301 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" 
err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:53:33.000473 kubelet[2301]: I0913 00:53:33.000460 2301 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:53:33.000575 kubelet[2301]: I0913 00:53:33.000568 2301 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:53:33.021163 kernel: audit: type=1400 audit(1757724812.996:217): avc: denied { mac_admin } for pid=2301 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:33.021272 kernel: audit: type=1401 audit(1757724812.996:217): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:32.996000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:32.996000 audit[2301]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00096b170 a1=c000974d80 a2=c00096b140 a3=25 items=0 ppid=1 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:32.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:32.996000 audit[2301]: AVC avc: denied { mac_admin } for pid=2301 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:32.996000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:32.996000 audit[2301]: SYSCALL 
arch=c000003e syscall=188 success=no exit=-22 a0=c0009692e0 a1=c000974d98 a2=c00096b200 a3=25 items=0 ppid=1 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:32.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:33.021569 kubelet[2301]: E0913 00:53:32.999220 2301 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-1677b4f607.1864b16792fce883 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-1677b4f607,UID:ci-3510.3.8-n-1677b4f607,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-1677b4f607,},FirstTimestamp:2025-09-13 00:53:32.987132035 +0000 UTC m=+1.079778634,LastTimestamp:2025-09-13 00:53:32.987132035 +0000 UTC m=+1.079778634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-1677b4f607,}" Sep 13 00:53:33.021569 kubelet[2301]: E0913 00:53:33.014655 2301 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:53:33.021569 kubelet[2301]: I0913 00:53:33.015137 2301 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:53:33.022868 kubelet[2301]: I0913 00:53:33.022844 2301 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:53:33.024000 audit[2312]: NETFILTER_CFG table=mangle:33 family=2 entries=2 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:33.024000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd6be29cf0 a2=0 a3=7ffd6be29cdc items=0 ppid=2301 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.024000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:53:33.026547 kubelet[2301]: I0913 00:53:33.025855 2301 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:53:33.026547 kubelet[2301]: E0913 00:53:33.026151 2301 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-1677b4f607\" not found" Sep 13 00:53:33.026655 kubelet[2301]: I0913 00:53:33.026552 2301 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:53:33.026655 kubelet[2301]: I0913 00:53:33.026602 2301 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:53:33.027372 kubelet[2301]: I0913 00:53:33.027353 2301 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:53:33.027449 kubelet[2301]: I0913 00:53:33.027432 2301 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:53:33.027686 kubelet[2301]: W0913 00:53:33.027635 2301 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Sep 13 00:53:33.027748 kubelet[2301]: E0913 00:53:33.027689 2301 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:33.027786 kubelet[2301]: E0913 00:53:33.027754 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-1677b4f607?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="200ms" Sep 13 00:53:33.028906 kubelet[2301]: I0913 00:53:33.028887 2301 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:53:33.027000 audit[2313]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:33.027000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe89ed47f0 a2=0 a3=7ffe89ed47dc items=0 ppid=2301 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.027000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:53:33.031000 audit[2315]: 
NETFILTER_CFG table=filter:35 family=2 entries=2 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:33.031000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffccca47d10 a2=0 a3=7ffccca47cfc items=0 ppid=2301 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:53:33.039000 audit[2317]: NETFILTER_CFG table=filter:36 family=2 entries=2 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:33.039000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd9f51e570 a2=0 a3=7ffd9f51e55c items=0 ppid=2301 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:53:33.059000 audit[2324]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:33.059000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff19c6af10 a2=0 a3=7fff19c6aefc items=0 ppid=2301 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.059000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 13 00:53:33.061250 kubelet[2301]: I0913 00:53:33.061153 2301 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:53:33.061000 audit[2325]: NETFILTER_CFG table=mangle:38 family=10 entries=2 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:33.061000 audit[2325]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc377cb8e0 a2=0 a3=7ffc377cb8cc items=0 ppid=2301 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.061000 audit[2326]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:33.061000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff355fe760 a2=0 a3=7fff355fe74c items=0 ppid=2301 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:53:33.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:53:33.063245 kubelet[2301]: I0913 00:53:33.063224 2301 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:53:33.063371 kubelet[2301]: I0913 00:53:33.063357 2301 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:53:33.063415 kubelet[2301]: I0913 00:53:33.063386 2301 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:53:33.063464 kubelet[2301]: E0913 00:53:33.063438 2301 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:53:33.064083 kubelet[2301]: W0913 00:53:33.064047 2301 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Sep 13 00:53:33.064164 kubelet[2301]: E0913 00:53:33.064086 2301 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:33.063000 audit[2328]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:33.063000 audit[2328]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb3b0e480 a2=0 a3=7ffcb3b0e46c items=0 ppid=2301 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:53:33.064000 audit[2329]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2329 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:33.064000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6f3938b0 a2=0 a3=7ffe6f39389c items=0 ppid=2301 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:53:33.065000 audit[2330]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:33.065000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd14592450 a2=0 a3=7ffd1459243c items=0 ppid=2301 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:53:33.066000 audit[2331]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:33.066000 audit[2331]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff299538e0 a2=0 a3=7fff299538cc items=0 ppid=2301 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:53:33.069750 kubelet[2301]: I0913 00:53:33.069728 2301 
cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:53:33.069750 kubelet[2301]: I0913 00:53:33.069747 2301 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:53:33.069868 kubelet[2301]: I0913 00:53:33.069764 2301 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:33.068000 audit[2332]: NETFILTER_CFG table=filter:44 family=10 entries=2 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:33.068000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdb157a330 a2=0 a3=7ffdb157a31c items=0 ppid=2301 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:53:33.076746 kubelet[2301]: I0913 00:53:33.076724 2301 policy_none.go:49] "None policy: Start" Sep 13 00:53:33.077334 kubelet[2301]: I0913 00:53:33.077322 2301 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:53:33.077412 kubelet[2301]: I0913 00:53:33.077406 2301 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:53:33.085955 kubelet[2301]: I0913 00:53:33.085937 2301 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:53:33.084000 audit[2301]: AVC avc: denied { mac_admin } for pid=2301 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:33.084000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:33.084000 audit[2301]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e955f0 a1=c000e77188 a2=c000e955c0 a3=25 
items=0 ppid=1 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:33.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:33.086258 kubelet[2301]: I0913 00:53:33.086247 2301 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:53:33.086385 kubelet[2301]: I0913 00:53:33.086378 2301 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:53:33.086442 kubelet[2301]: I0913 00:53:33.086420 2301 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:53:33.087813 kubelet[2301]: I0913 00:53:33.087804 2301 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:53:33.088707 kubelet[2301]: E0913 00:53:33.088689 2301 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-1677b4f607\" not found" Sep 13 00:53:33.092100 kubelet[2301]: E0913 00:53:33.092034 2301 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-1677b4f607.1864b16792fce883 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-1677b4f607,UID:ci-3510.3.8-n-1677b4f607,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-1677b4f607,},FirstTimestamp:2025-09-13 00:53:32.987132035 +0000 UTC m=+1.079778634,LastTimestamp:2025-09-13 00:53:32.987132035 +0000 UTC m=+1.079778634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-1677b4f607,}" Sep 13 00:53:33.188288 kubelet[2301]: I0913 00:53:33.188242 2301 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.188767 kubelet[2301]: E0913 00:53:33.188738 2301 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.228307 kubelet[2301]: I0913 00:53:33.228201 2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72541654885b11547abd340a573dd473-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-1677b4f607\" (UID: \"72541654885b11547abd340a573dd473\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.228538 kubelet[2301]: I0913 00:53:33.228516 2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72541654885b11547abd340a573dd473-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-1677b4f607\" (UID: \"72541654885b11547abd340a573dd473\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.228642 kubelet[2301]: I0913 00:53:33.228628 2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.228729 kubelet[2301]: I0913 00:53:33.228717 2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.228825 kubelet[2301]: I0913 00:53:33.228812 2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72541654885b11547abd340a573dd473-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-1677b4f607\" (UID: \"72541654885b11547abd340a573dd473\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.228912 kubelet[2301]: I0913 00:53:33.228900 2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.228997 kubelet[2301]: I0913 00:53:33.228983 2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 
00:53:33.229087 kubelet[2301]: I0913 00:53:33.229073 2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.229177 kubelet[2301]: I0913 00:53:33.229165 2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fbd75ce79f6f6572982faefb2e61ec5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-1677b4f607\" (UID: \"0fbd75ce79f6f6572982faefb2e61ec5\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.230492 kubelet[2301]: E0913 00:53:33.230460 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-1677b4f607?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="400ms" Sep 13 00:53:33.390370 kubelet[2301]: I0913 00:53:33.390341 2301 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.390701 kubelet[2301]: E0913 00:53:33.390667 2301 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.472371 env[1564]: time="2025-09-13T00:53:33.472325108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-1677b4f607,Uid:72541654885b11547abd340a573dd473,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:33.473707 env[1564]: time="2025-09-13T00:53:33.473671791Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-1677b4f607,Uid:bb26f85f50c66bffacd90ee2b2b6bbb9,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:33.476241 env[1564]: time="2025-09-13T00:53:33.476214158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-1677b4f607,Uid:0fbd75ce79f6f6572982faefb2e61ec5,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:33.631683 kubelet[2301]: E0913 00:53:33.631639 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-1677b4f607?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="800ms" Sep 13 00:53:33.792882 kubelet[2301]: I0913 00:53:33.792852 2301 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.793301 kubelet[2301]: E0913 00:53:33.793203 2301 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:33.880078 kubelet[2301]: W0913 00:53:33.880044 2301 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Sep 13 00:53:33.880245 kubelet[2301]: E0913 00:53:33.880089 2301 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:33.953126 kubelet[2301]: W0913 00:53:33.953002 2301 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://10.200.4.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Sep 13 00:53:33.953126 kubelet[2301]: E0913 00:53:33.953073 2301 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:34.087798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534400069.mount: Deactivated successfully. Sep 13 00:53:34.130326 env[1564]: time="2025-09-13T00:53:34.130275416Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.133591 env[1564]: time="2025-09-13T00:53:34.133541076Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.146971 env[1564]: time="2025-09-13T00:53:34.146924109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.150206 env[1564]: time="2025-09-13T00:53:34.150158868Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.155562 env[1564]: time="2025-09-13T00:53:34.155520002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.161090 
env[1564]: time="2025-09-13T00:53:34.161054732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.164050 env[1564]: time="2025-09-13T00:53:34.164018196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.167119 env[1564]: time="2025-09-13T00:53:34.167089057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.170159 env[1564]: time="2025-09-13T00:53:34.170127619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.173669 env[1564]: time="2025-09-13T00:53:34.173638375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.177366 env[1564]: time="2025-09-13T00:53:34.177335929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.181786 env[1564]: time="2025-09-13T00:53:34.181754374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:34.190630 kubelet[2301]: W0913 00:53:34.190595 2301 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.RuntimeClass: Get "https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Sep 13 00:53:34.190754 kubelet[2301]: E0913 00:53:34.190642 2301 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:34.283553 env[1564]: time="2025-09-13T00:53:34.281300532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:34.283553 env[1564]: time="2025-09-13T00:53:34.281331332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:34.283553 env[1564]: time="2025-09-13T00:53:34.281341632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:34.283553 env[1564]: time="2025-09-13T00:53:34.281505130Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58926356c4e728105a3a488cb4c61af31d92be404655f4c16f59dbf338509691 pid=2341 runtime=io.containerd.runc.v2 Sep 13 00:53:34.309425 env[1564]: time="2025-09-13T00:53:34.309362182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:34.309631 env[1564]: time="2025-09-13T00:53:34.309606079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:34.309730 env[1564]: time="2025-09-13T00:53:34.309709778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:34.309943 env[1564]: time="2025-09-13T00:53:34.309917775Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/07a2e598025f9ff17ec582a399e96369a4405525760fd9b6f8e0b694c27f156a pid=2367 runtime=io.containerd.runc.v2 Sep 13 00:53:34.344266 env[1564]: time="2025-09-13T00:53:34.341545081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:34.344266 env[1564]: time="2025-09-13T00:53:34.341589880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:34.344266 env[1564]: time="2025-09-13T00:53:34.341606680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:34.344266 env[1564]: time="2025-09-13T00:53:34.341794078Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9773cb6ccbda14aba18148407af16b710b07022927a253ba0eb750224bb0fd3 pid=2396 runtime=io.containerd.runc.v2 Sep 13 00:53:34.383764 env[1564]: time="2025-09-13T00:53:34.383712055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-1677b4f607,Uid:72541654885b11547abd340a573dd473,Namespace:kube-system,Attempt:0,} returns sandbox id \"58926356c4e728105a3a488cb4c61af31d92be404655f4c16f59dbf338509691\"" Sep 13 00:53:34.387462 env[1564]: time="2025-09-13T00:53:34.387432308Z" level=info msg="CreateContainer within sandbox \"58926356c4e728105a3a488cb4c61af31d92be404655f4c16f59dbf338509691\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:53:34.412098 env[1564]: time="2025-09-13T00:53:34.412047301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-1677b4f607,Uid:0fbd75ce79f6f6572982faefb2e61ec5,Namespace:kube-system,Attempt:0,} returns sandbox id \"07a2e598025f9ff17ec582a399e96369a4405525760fd9b6f8e0b694c27f156a\"" Sep 13 00:53:34.419212 env[1564]: time="2025-09-13T00:53:34.418749418Z" level=info msg="CreateContainer within sandbox \"07a2e598025f9ff17ec582a399e96369a4405525760fd9b6f8e0b694c27f156a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:53:34.433028 kubelet[2301]: E0913 00:53:34.432963 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-1677b4f607?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="1.6s" Sep 13 00:53:34.438692 kubelet[2301]: W0913 00:53:34.438583 2301 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-1677b4f607&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Sep 13 00:53:34.438692 kubelet[2301]: E0913 00:53:34.438657 2301 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-1677b4f607&limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:34.455166 env[1564]: time="2025-09-13T00:53:34.455109564Z" level=info msg="CreateContainer within sandbox \"58926356c4e728105a3a488cb4c61af31d92be404655f4c16f59dbf338509691\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8de9753f35988d1c8fee7d4e637b9e69d887bd3d2ddf423c1be1585e98dbcd41\"" Sep 13 00:53:34.455494 env[1564]: time="2025-09-13T00:53:34.455460160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-1677b4f607,Uid:bb26f85f50c66bffacd90ee2b2b6bbb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9773cb6ccbda14aba18148407af16b710b07022927a253ba0eb750224bb0fd3\"" Sep 13 00:53:34.456081 env[1564]: time="2025-09-13T00:53:34.456050752Z" level=info msg="StartContainer for \"8de9753f35988d1c8fee7d4e637b9e69d887bd3d2ddf423c1be1585e98dbcd41\"" Sep 13 00:53:34.458292 env[1564]: time="2025-09-13T00:53:34.458257925Z" level=info msg="CreateContainer within sandbox \"f9773cb6ccbda14aba18148407af16b710b07022927a253ba0eb750224bb0fd3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:53:34.489057 env[1564]: time="2025-09-13T00:53:34.488261650Z" level=info msg="CreateContainer within sandbox \"07a2e598025f9ff17ec582a399e96369a4405525760fd9b6f8e0b694c27f156a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"7c5b353988f16ce830a5667ca01942d0064ac31db39cadcbc2d95bc89de5ad94\"" Sep 13 00:53:34.489685 env[1564]: time="2025-09-13T00:53:34.489658833Z" level=info msg="StartContainer for \"7c5b353988f16ce830a5667ca01942d0064ac31db39cadcbc2d95bc89de5ad94\"" Sep 13 00:53:34.504709 env[1564]: time="2025-09-13T00:53:34.504665946Z" level=info msg="CreateContainer within sandbox \"f9773cb6ccbda14aba18148407af16b710b07022927a253ba0eb750224bb0fd3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"571b24fa417261e93e8b58bfb55e51ad6f6481223285d67f766f63c192a267e3\"" Sep 13 00:53:34.505343 env[1564]: time="2025-09-13T00:53:34.505319338Z" level=info msg="StartContainer for \"571b24fa417261e93e8b58bfb55e51ad6f6481223285d67f766f63c192a267e3\"" Sep 13 00:53:34.553210 env[1564]: time="2025-09-13T00:53:34.552243352Z" level=info msg="StartContainer for \"8de9753f35988d1c8fee7d4e637b9e69d887bd3d2ddf423c1be1585e98dbcd41\" returns successfully" Sep 13 00:53:34.596322 kubelet[2301]: I0913 00:53:34.595825 2301 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:34.596322 kubelet[2301]: E0913 00:53:34.596291 2301 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:34.628909 env[1564]: time="2025-09-13T00:53:34.628854096Z" level=info msg="StartContainer for \"7c5b353988f16ce830a5667ca01942d0064ac31db39cadcbc2d95bc89de5ad94\" returns successfully" Sep 13 00:53:34.688005 env[1564]: time="2025-09-13T00:53:34.687938259Z" level=info msg="StartContainer for \"571b24fa417261e93e8b58bfb55e51ad6f6481223285d67f766f63c192a267e3\" returns successfully" Sep 13 00:53:36.199000 kubelet[2301]: I0913 00:53:36.198977 2301 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:36.858637 kubelet[2301]: E0913 
00:53:36.858597 2301 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-1677b4f607\" not found" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:37.002060 kubelet[2301]: I0913 00:53:37.002030 2301 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:37.002305 kubelet[2301]: E0913 00:53:37.002288 2301 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-1677b4f607\": node \"ci-3510.3.8-n-1677b4f607\" not found" Sep 13 00:53:37.027240 kubelet[2301]: E0913 00:53:37.027204 2301 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-1677b4f607\" not found" Sep 13 00:53:37.128367 kubelet[2301]: E0913 00:53:37.128234 2301 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-1677b4f607\" not found" Sep 13 00:53:37.979804 kubelet[2301]: I0913 00:53:37.979771 2301 apiserver.go:52] "Watching apiserver" Sep 13 00:53:38.027442 kubelet[2301]: I0913 00:53:38.027408 2301 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:53:38.117540 kubelet[2301]: W0913 00:53:38.117498 2301 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:53:38.702819 kubelet[2301]: W0913 00:53:38.702781 2301 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:53:39.114883 systemd[1]: Reloading. 
Sep 13 00:53:39.193097 /usr/lib/systemd/system-generators/torcx-generator[2591]: time="2025-09-13T00:53:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:53:39.193135 /usr/lib/systemd/system-generators/torcx-generator[2591]: time="2025-09-13T00:53:39Z" level=info msg="torcx already run" Sep 13 00:53:39.287472 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:53:39.287491 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:53:39.302138 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:53:39.376475 kubelet[2301]: W0913 00:53:39.376385 2301 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:53:39.419025 systemd[1]: Stopping kubelet.service... Sep 13 00:53:39.437603 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:53:39.438225 systemd[1]: Stopped kubelet.service. Sep 13 00:53:39.448210 kernel: kauditd_printk_skb: 46 callbacks suppressed Sep 13 00:53:39.448293 kernel: audit: type=1131 audit(1757724819.437:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:39.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:39.451924 systemd[1]: Starting kubelet.service... Sep 13 00:53:39.616646 systemd[1]: Started kubelet.service. Sep 13 00:53:39.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:39.634213 kernel: audit: type=1130 audit(1757724819.615:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:39.672288 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:53:39.672612 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:53:39.672650 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:53:39.672763 kubelet[2667]: I0913 00:53:39.672736 2667 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:53:39.678670 kubelet[2667]: I0913 00:53:39.678641 2667 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:53:39.678670 kubelet[2667]: I0913 00:53:39.678664 2667 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:53:39.678910 kubelet[2667]: I0913 00:53:39.678891 2667 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:53:39.680255 kubelet[2667]: I0913 00:53:39.680233 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:53:39.682420 kubelet[2667]: I0913 00:53:39.682401 2667 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:53:39.687976 kubelet[2667]: E0913 00:53:39.687937 2667 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:53:39.687976 kubelet[2667]: I0913 00:53:39.687969 2667 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 00:53:40.220703 kernel: audit: type=1400 audit(1757724819.699:234): avc: denied { mac_admin } for pid=2667 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:40.220794 kernel: audit: type=1401 audit(1757724819.699:234): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:40.220820 kernel: audit: type=1300 audit(1757724819.699:234): arch=c000003e syscall=188 success=no exit=-22 a0=c000b4c630 a1=c000845638 a2=c000b4c600 a3=25 items=0 ppid=1 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:40.220848 kernel: audit: type=1327 audit(1757724819.699:234): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:40.220878 kernel: audit: type=1400 audit(1757724819.699:235): avc: denied { mac_admin } for pid=2667 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:40.220903 kernel: audit: type=1401 audit(1757724819.699:235): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:40.220928 kernel: audit: type=1300 audit(1757724819.699:235): arch=c000003e syscall=188 success=no exit=-22 a0=c000857ac0 a1=c000845650 a2=c000b4c6c0 a3=25 items=0 ppid=1 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:40.220953 kernel: audit: type=1327 audit(1757724819.699:235): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:39.699000 audit[2667]: AVC avc: denied { mac_admin } for pid=2667 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:39.699000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:39.699000 audit[2667]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b4c630 a1=c000845638 a2=c000b4c600 a3=25 items=0 ppid=1 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.699000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:39.699000 audit[2667]: AVC avc: denied { mac_admin } for pid=2667 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:39.699000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:39.699000 audit[2667]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000857ac0 a1=c000845650 a2=c000b4c6c0 a3=25 items=0 ppid=1 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.699000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:40.221474 kubelet[2667]: I0913 00:53:39.690862 2667 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:53:40.221474 kubelet[2667]: I0913 00:53:39.691229 2667 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:53:40.221474 kubelet[2667]: I0913 00:53:39.691317 2667 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:53:40.221474 kubelet[2667]: I0913 00:53:39.691335 2667 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-1677b4f607","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quan
tity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.691493 2667 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.691502 2667 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.691535 2667 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.691618 2667 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.691627 2667 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.691651 2667 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.691659 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.697560 2667 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.697994 2667 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.698473 2667 server.go:1274] "Started kubelet" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.700837 2667 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" 
path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.700881 2667 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.700908 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.707979 2667 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.708862 2667 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:53:40.221696 kubelet[2667]: I0913 00:53:39.709709 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.715737 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:53:40.222183 kubelet[2667]: E0913 00:53:39.717790 2667 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-1677b4f607\" not found" Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.717838 2667 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.726161 2667 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.753481 2667 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.753572 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 
00:53:40.222183 kubelet[2667]: I0913 00:53:39.763270 2667 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.776954 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.778035 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.778050 2667 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.778069 2667 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:53:40.222183 kubelet[2667]: E0913 00:53:39.778109 2667 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.841538 2667 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.841553 2667 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:53:40.222183 kubelet[2667]: I0913 00:53:39.841571 2667 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:53:40.222183 kubelet[2667]: E0913 00:53:39.879001 2667 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:53:40.222696 kubelet[2667]: E0913 00:53:40.079441 2667 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:53:40.224697 kubelet[2667]: I0913 00:53:40.224679 2667 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:53:40.225045 kubelet[2667]: I0913 00:53:40.225020 2667 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:53:40.225137 kubelet[2667]: I0913 00:53:40.225115 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:53:40.225234 
kubelet[2667]: I0913 00:53:40.225198 2667 policy_none.go:49] "None policy: Start" Sep 13 00:53:40.227419 kubelet[2667]: I0913 00:53:40.227403 2667 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:53:40.228824 kubelet[2667]: I0913 00:53:40.228537 2667 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:53:40.229039 kubelet[2667]: I0913 00:53:40.229022 2667 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:53:40.229226 kubelet[2667]: I0913 00:53:40.229195 2667 state_mem.go:75] "Updated machine memory state" Sep 13 00:53:40.232403 kubelet[2667]: I0913 00:53:40.232381 2667 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:53:40.231000 audit[2667]: AVC avc: denied { mac_admin } for pid=2667 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:40.231000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:40.231000 audit[2667]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0010f9200 a1=c0010f2a80 a2=c0010f91d0 a3=25 items=0 ppid=1 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:40.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:40.232724 kubelet[2667]: I0913 00:53:40.232456 2667 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:53:40.232724 kubelet[2667]: I0913 00:53:40.232588 2667 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:53:40.232922 kubelet[2667]: I0913 00:53:40.232874 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:53:40.238662 kubelet[2667]: I0913 00:53:40.238634 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:53:40.360019 kubelet[2667]: I0913 00:53:40.359997 2667 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.370687 kubelet[2667]: I0913 00:53:40.370667 2667 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.370871 kubelet[2667]: I0913 00:53:40.370852 2667 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.493417 kubelet[2667]: W0913 00:53:40.492595 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:53:40.493417 kubelet[2667]: E0913 00:53:40.492660 2667 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.8-n-1677b4f607\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.493417 kubelet[2667]: W0913 00:53:40.492793 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:53:40.493417 kubelet[2667]: W0913 00:53:40.492850 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:53:40.493417 kubelet[2667]: 
E0913 00:53:40.492881 2667 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-1677b4f607\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.493417 kubelet[2667]: E0913 00:53:40.492939 2667 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.626795 kubelet[2667]: I0913 00:53:40.626757 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72541654885b11547abd340a573dd473-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-1677b4f607\" (UID: \"72541654885b11547abd340a573dd473\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.626960 kubelet[2667]: I0913 00:53:40.626803 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.626960 kubelet[2667]: I0913 00:53:40.626836 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.626960 kubelet[2667]: I0913 00:53:40.626858 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.626960 kubelet[2667]: I0913 00:53:40.626882 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.626960 kubelet[2667]: I0913 00:53:40.626905 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb26f85f50c66bffacd90ee2b2b6bbb9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-1677b4f607\" (UID: \"bb26f85f50c66bffacd90ee2b2b6bbb9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.627155 kubelet[2667]: I0913 00:53:40.626935 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fbd75ce79f6f6572982faefb2e61ec5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-1677b4f607\" (UID: \"0fbd75ce79f6f6572982faefb2e61ec5\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.627155 kubelet[2667]: I0913 00:53:40.626956 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72541654885b11547abd340a573dd473-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-1677b4f607\" (UID: \"72541654885b11547abd340a573dd473\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.627155 
kubelet[2667]: I0913 00:53:40.626983 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72541654885b11547abd340a573dd473-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-1677b4f607\" (UID: \"72541654885b11547abd340a573dd473\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-1677b4f607" Sep 13 00:53:40.693839 kubelet[2667]: I0913 00:53:40.693523 2667 apiserver.go:52] "Watching apiserver" Sep 13 00:53:40.727207 kubelet[2667]: I0913 00:53:40.727164 2667 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:53:40.834015 kubelet[2667]: I0913 00:53:40.833949 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-1677b4f607" podStartSLOduration=1.8339302640000001 podStartE2EDuration="1.833930264s" podCreationTimestamp="2025-09-13 00:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:40.822858682 +0000 UTC m=+1.198912070" watchObservedRunningTime="2025-09-13 00:53:40.833930264 +0000 UTC m=+1.209983552" Sep 13 00:53:40.845553 kubelet[2667]: I0913 00:53:40.845491 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-1677b4f607" podStartSLOduration=2.845473442 podStartE2EDuration="2.845473442s" podCreationTimestamp="2025-09-13 00:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:40.834495158 +0000 UTC m=+1.210548546" watchObservedRunningTime="2025-09-13 00:53:40.845473442 +0000 UTC m=+1.221526730" Sep 13 00:53:40.856798 kubelet[2667]: I0913 00:53:40.855498 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-3510.3.8-n-1677b4f607" podStartSLOduration=2.855470036 podStartE2EDuration="2.855470036s" podCreationTimestamp="2025-09-13 00:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:40.846048936 +0000 UTC m=+1.222102224" watchObservedRunningTime="2025-09-13 00:53:40.855470036 +0000 UTC m=+1.231523324" Sep 13 00:53:43.973680 kubelet[2667]: I0913 00:53:43.973536 2667 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:53:43.974226 kubelet[2667]: I0913 00:53:43.974208 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:53:43.974289 env[1564]: time="2025-09-13T00:53:43.973917850Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:53:44.648092 kubelet[2667]: I0913 00:53:44.648010 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f65fbd99-9197-443f-842d-a8913bd30abd-kube-proxy\") pod \"kube-proxy-7qkj6\" (UID: \"f65fbd99-9197-443f-842d-a8913bd30abd\") " pod="kube-system/kube-proxy-7qkj6" Sep 13 00:53:44.648092 kubelet[2667]: I0913 00:53:44.648056 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f65fbd99-9197-443f-842d-a8913bd30abd-lib-modules\") pod \"kube-proxy-7qkj6\" (UID: \"f65fbd99-9197-443f-842d-a8913bd30abd\") " pod="kube-system/kube-proxy-7qkj6" Sep 13 00:53:44.648092 kubelet[2667]: I0913 00:53:44.648085 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2wt2\" (UniqueName: \"kubernetes.io/projected/f65fbd99-9197-443f-842d-a8913bd30abd-kube-api-access-w2wt2\") pod 
\"kube-proxy-7qkj6\" (UID: \"f65fbd99-9197-443f-842d-a8913bd30abd\") " pod="kube-system/kube-proxy-7qkj6" Sep 13 00:53:44.648092 kubelet[2667]: I0913 00:53:44.648109 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f65fbd99-9197-443f-842d-a8913bd30abd-xtables-lock\") pod \"kube-proxy-7qkj6\" (UID: \"f65fbd99-9197-443f-842d-a8913bd30abd\") " pod="kube-system/kube-proxy-7qkj6" Sep 13 00:53:44.753544 kubelet[2667]: I0913 00:53:44.753508 2667 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:53:44.784426 env[1564]: time="2025-09-13T00:53:44.784387101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qkj6,Uid:f65fbd99-9197-443f-842d-a8913bd30abd,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:44.838138 env[1564]: time="2025-09-13T00:53:44.833149335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:44.838138 env[1564]: time="2025-09-13T00:53:44.837647992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:44.838138 env[1564]: time="2025-09-13T00:53:44.837741591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:44.838138 env[1564]: time="2025-09-13T00:53:44.838435784Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cf71855c3a84c00e09236ca564ca6390f70d0ee3e7a0136464259c7f356fb93 pid=2723 runtime=io.containerd.runc.v2 Sep 13 00:53:44.911641 env[1564]: time="2025-09-13T00:53:44.911548986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qkj6,Uid:f65fbd99-9197-443f-842d-a8913bd30abd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cf71855c3a84c00e09236ca564ca6390f70d0ee3e7a0136464259c7f356fb93\"" Sep 13 00:53:44.915373 env[1564]: time="2025-09-13T00:53:44.915330150Z" level=info msg="CreateContainer within sandbox \"4cf71855c3a84c00e09236ca564ca6390f70d0ee3e7a0136464259c7f356fb93\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:53:44.949514 env[1564]: time="2025-09-13T00:53:44.949467024Z" level=info msg="CreateContainer within sandbox \"4cf71855c3a84c00e09236ca564ca6390f70d0ee3e7a0136464259c7f356fb93\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c903820a84b6bafb13b3d74826e832692dd8f447774bddfbb9516fda8e49a3f\"" Sep 13 00:53:44.950885 env[1564]: time="2025-09-13T00:53:44.950208817Z" level=info msg="StartContainer for \"2c903820a84b6bafb13b3d74826e832692dd8f447774bddfbb9516fda8e49a3f\"" Sep 13 00:53:45.032670 env[1564]: time="2025-09-13T00:53:45.032605137Z" level=info msg="StartContainer for \"2c903820a84b6bafb13b3d74826e832692dd8f447774bddfbb9516fda8e49a3f\" returns successfully" Sep 13 00:53:45.175000 audit[2826]: NETFILTER_CFG table=mangle:45 family=10 entries=1 op=nft_register_chain pid=2826 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:45.181558 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:53:45.181677 kernel: audit: type=1325 audit(1757724825.175:237): table=mangle:45 family=10 entries=1 op=nft_register_chain pid=2826 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:45.180000 audit[2827]: NETFILTER_CFG table=mangle:46 family=2 entries=1 op=nft_register_chain pid=2827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.204448 kernel: audit: type=1325 audit(1757724825.180:238): table=mangle:46 family=2 entries=1 op=nft_register_chain pid=2827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.180000 audit[2827]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1a736f50 a2=0 a3=7ffc1a736f3c items=0 ppid=2776 pid=2827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.226385 kernel: audit: type=1300 audit(1757724825.180:238): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1a736f50 a2=0 a3=7ffc1a736f3c items=0 ppid=2776 pid=2827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.226480 kernel: audit: type=1327 audit(1757724825.180:238): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:53:45.180000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:53:45.185000 audit[2828]: NETFILTER_CFG table=nat:47 family=2 entries=1 op=nft_register_chain pid=2828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.249549 kernel: audit: type=1325 audit(1757724825.185:239): table=nat:47 family=2 entries=1 op=nft_register_chain pid=2828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.249630 kernel: audit: type=1300 audit(1757724825.185:239): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6509e570 
a2=0 a3=7ffd6509e55c items=0 ppid=2776 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.185000 audit[2828]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6509e570 a2=0 a3=7ffd6509e55c items=0 ppid=2776 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.252065 kubelet[2667]: I0913 00:53:45.251982 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a6daaea9-bc02-46cc-bb1d-18f0d5a7a5f3-var-lib-calico\") pod \"tigera-operator-58fc44c59b-2chfv\" (UID: \"a6daaea9-bc02-46cc-bb1d-18f0d5a7a5f3\") " pod="tigera-operator/tigera-operator-58fc44c59b-2chfv" Sep 13 00:53:45.252065 kubelet[2667]: I0913 00:53:45.252018 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6spf\" (UniqueName: \"kubernetes.io/projected/a6daaea9-bc02-46cc-bb1d-18f0d5a7a5f3-kube-api-access-k6spf\") pod \"tigera-operator-58fc44c59b-2chfv\" (UID: \"a6daaea9-bc02-46cc-bb1d-18f0d5a7a5f3\") " pod="tigera-operator/tigera-operator-58fc44c59b-2chfv" Sep 13 00:53:45.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:53:45.282112 kernel: audit: type=1327 audit(1757724825.185:239): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:53:45.185000 audit[2829]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_chain pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.321811 kernel: audit: 
type=1325 audit(1757724825.185:240): table=filter:48 family=2 entries=1 op=nft_register_chain pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.321948 kernel: audit: type=1300 audit(1757724825.185:240): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc71d01c10 a2=0 a3=7ffc71d01bfc items=0 ppid=2776 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.185000 audit[2829]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc71d01c10 a2=0 a3=7ffc71d01bfc items=0 ppid=2776 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:53:45.175000 audit[2826]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffaa361590 a2=0 a3=7fffaa36157c items=0 ppid=2776 pid=2826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:53:45.185000 audit[2830]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=2830 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:45.185000 audit[2830]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe940fa970 a2=0 a3=7ffe940fa95c items=0 ppid=2776 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.185000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:53:45.190000 audit[2831]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_chain pid=2831 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:45.190000 audit[2831]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe98a18c0 a2=0 a3=7fffe98a18ac items=0 ppid=2776 pid=2831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:53:45.281000 audit[2832]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.281000 audit[2832]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffda55bc6e0 a2=0 a3=7ffda55bc6cc items=0 ppid=2776 pid=2832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:53:45.281000 audit[2834]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.281000 audit[2834]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdde903c50 a2=0 a3=7ffdde903c3c items=0 ppid=2776 pid=2834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 00:53:45.335335 kernel: audit: type=1327 audit(1757724825.185:240): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:53:45.287000 audit[2837]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.287000 audit[2837]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff58eaa610 a2=0 a3=7fff58eaa5fc items=0 ppid=2776 pid=2837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:45.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 13 00:53:45.287000 audit[2838]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:45.287000 audit[2838]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0d55f380 a2=0 a3=7ffc0d55f36c items=0 ppid=2776 pid=2838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:53:45.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Sep 13 00:53:45.292000 audit[2840]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2840 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.292000 audit[2840]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9a565090 a2=0 a3=7ffd9a56507c items=0 ppid=2776 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Sep 13 00:53:45.295000 audit[2841]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.295000 audit[2841]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe93473550 a2=0 a3=7ffe9347353c items=0 ppid=2776 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Sep 13 00:53:45.295000 audit[2843]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.295000 audit[2843]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce4579830 a2=0 a3=7ffce457981c items=0 ppid=2776 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Sep 13 00:53:45.301000 audit[2846]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2846 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.301000 audit[2846]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc01829050 a2=0 a3=7ffc0182903c items=0 ppid=2776 pid=2846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53
Sep 13 00:53:45.301000 audit[2847]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2847 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.301000 audit[2847]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6d2d77b0 a2=0 a3=7ffc6d2d779c items=0 ppid=2776 pid=2847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Sep 13 00:53:45.306000 audit[2849]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2849 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.306000 audit[2849]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff530a5760 a2=0 a3=7fff530a574c items=0 ppid=2776 pid=2849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Sep 13 00:53:45.306000 audit[2850]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=2850 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.306000 audit[2850]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffffd2d6a0 a2=0 a3=7fffffd2d68c items=0 ppid=2776 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Sep 13 00:53:45.311000 audit[2852]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2852 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.311000 audit[2852]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd978ccdf0 a2=0 a3=7ffd978ccddc items=0 ppid=2776 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.311000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Sep 13 00:53:45.316000 audit[2855]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_rule pid=2855 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.316000 audit[2855]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd6f08720 a2=0 a3=7ffdd6f0870c items=0 ppid=2776 pid=2855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Sep 13 00:53:45.321000 audit[2858]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2858 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.321000 audit[2858]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe69b03190 a2=0 a3=7ffe69b0317c items=0 ppid=2776 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.321000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Sep 13 00:53:45.335000 audit[2859]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_chain pid=2859 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.335000 audit[2859]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb4209eb0 a2=0 a3=7ffeb4209e9c items=0 ppid=2776 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Sep 13 00:53:45.338000 audit[2861]: NETFILTER_CFG table=nat:66 family=2 entries=1 op=nft_register_rule pid=2861 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.338000 audit[2861]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd4787bfb0 a2=0 a3=7ffd4787bf9c items=0 ppid=2776 pid=2861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Sep 13 00:53:45.342000 audit[2864]: NETFILTER_CFG table=nat:67 family=2 entries=1 op=nft_register_rule pid=2864 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.342000 audit[2864]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5c449030 a2=0 a3=7ffd5c44901c items=0 ppid=2776 pid=2864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Sep 13 00:53:45.343000 audit[2865]: NETFILTER_CFG table=nat:68 family=2 entries=1 op=nft_register_chain pid=2865 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.343000 audit[2865]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe7175b00 a2=0 a3=7fffe7175aec items=0 ppid=2776 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.343000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Sep 13 00:53:45.345000 audit[2867]: NETFILTER_CFG table=nat:69 family=2 entries=1 op=nft_register_rule pid=2867 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:45.345000 audit[2867]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd9b75b5f0 a2=0 a3=7ffd9b75b5dc items=0 ppid=2776 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Sep 13 00:53:45.391000 audit[2873]: NETFILTER_CFG table=filter:70 family=2 entries=8 op=nft_register_rule pid=2873 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:53:45.391000 audit[2873]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffdb857610 a2=0 a3=7fffdb8575fc items=0 ppid=2776 pid=2873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:53:45.403000 audit[2873]: NETFILTER_CFG table=nat:71 family=2 entries=14 op=nft_register_chain pid=2873 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:53:45.403000 audit[2873]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffdb857610 a2=0 a3=7fffdb8575fc items=0 ppid=2776 pid=2873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:53:45.405000 audit[2879]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.405000 audit[2879]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe41d82a40 a2=0 a3=7ffe41d82a2c items=0 ppid=2776 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.405000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Sep 13 00:53:45.408000 audit[2881]: NETFILTER_CFG table=filter:73 family=10 entries=2 op=nft_register_chain pid=2881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.408000 audit[2881]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcf72e6c90 a2=0 a3=7ffcf72e6c7c items=0 ppid=2776 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963
Sep 13 00:53:45.411000 audit[2884]: NETFILTER_CFG table=filter:74 family=10 entries=2 op=nft_register_chain pid=2884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.411000 audit[2884]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcb121c9d0 a2=0 a3=7ffcb121c9bc items=0 ppid=2776 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276
Sep 13 00:53:45.413000 audit[2885]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.413000 audit[2885]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff496129a0 a2=0 a3=7fff4961298c items=0 ppid=2776 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Sep 13 00:53:45.415000 audit[2887]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.415000 audit[2887]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe836a6870 a2=0 a3=7ffe836a685c items=0 ppid=2776 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Sep 13 00:53:45.416000 audit[2888]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_chain pid=2888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.416000 audit[2888]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa06c02a0 a2=0 a3=7fffa06c028c items=0 ppid=2776 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Sep 13 00:53:45.419000 audit[2890]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.419000 audit[2890]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff9e819d10 a2=0 a3=7fff9e819cfc items=0 ppid=2776 pid=2890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245
Sep 13 00:53:45.424590 env[1564]: time="2025-09-13T00:53:45.424123293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-2chfv,Uid:a6daaea9-bc02-46cc-bb1d-18f0d5a7a5f3,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:53:45.422000 audit[2893]: NETFILTER_CFG table=filter:79 family=10 entries=2 op=nft_register_chain pid=2893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.422000 audit[2893]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc117f8270 a2=0 a3=7ffc117f825c items=0 ppid=2776 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Sep 13 00:53:45.424000 audit[2894]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_chain pid=2894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.424000 audit[2894]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe84d03520 a2=0 a3=7ffe84d0350c items=0 ppid=2776 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.424000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Sep 13 00:53:45.428000 audit[2896]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=2896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.428000 audit[2896]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff4c6c130 a2=0 a3=7ffff4c6c11c items=0 ppid=2776 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Sep 13 00:53:45.429000 audit[2897]: NETFILTER_CFG table=filter:82 family=10 entries=1 op=nft_register_chain pid=2897 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.429000 audit[2897]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbd7bc7f0 a2=0 a3=7ffdbd7bc7dc items=0 ppid=2776 pid=2897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Sep 13 00:53:45.431000 audit[2899]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=2899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.431000 audit[2899]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff09ea9680 a2=0 a3=7fff09ea966c items=0 ppid=2776 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Sep 13 00:53:45.437000 audit[2902]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_rule pid=2902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.437000 audit[2902]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa0696ba0 a2=0 a3=7fffa0696b8c items=0 ppid=2776 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.437000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Sep 13 00:53:45.440000 audit[2905]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.440000 audit[2905]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd55acf070 a2=0 a3=7ffd55acf05c items=0 ppid=2776 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Sep 13 00:53:45.442000 audit[2906]: NETFILTER_CFG table=nat:86 family=10 entries=1 op=nft_register_chain pid=2906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.442000 audit[2906]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff3248e6b0 a2=0 a3=7fff3248e69c items=0 ppid=2776 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.442000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Sep 13 00:53:45.444000 audit[2908]: NETFILTER_CFG table=nat:87 family=10 entries=2 op=nft_register_chain pid=2908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.444000 audit[2908]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd7177f7d0 a2=0 a3=7ffd7177f7bc items=0 ppid=2776 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Sep 13 00:53:45.448000 audit[2911]: NETFILTER_CFG table=nat:88 family=10 entries=2 op=nft_register_chain pid=2911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.448000 audit[2911]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffcc638d490 a2=0 a3=7ffcc638d47c items=0 ppid=2776 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Sep 13 00:53:45.449000 audit[2912]: NETFILTER_CFG table=nat:89 family=10 entries=1 op=nft_register_chain pid=2912 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.449000 audit[2912]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe441324a0 a2=0 a3=7ffe4413248c items=0 ppid=2776 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.449000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Sep 13 00:53:45.451000 audit[2914]: NETFILTER_CFG table=nat:90 family=10 entries=2 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.451000 audit[2914]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff6f925f60 a2=0 a3=7fff6f925f4c items=0 ppid=2776 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Sep 13 00:53:45.452000 audit[2915]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=2915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.452000 audit[2915]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedfb42990 a2=0 a3=7ffedfb4297c items=0 ppid=2776 pid=2915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Sep 13 00:53:45.455000 audit[2917]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=2917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.455000 audit[2917]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff48061740 a2=0 a3=7fff4806172c items=0 ppid=2776 pid=2917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 13 00:53:45.458000 audit[2920]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:45.458000 audit[2920]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffbbb804a0 a2=0 a3=7fffbbb8048c items=0 ppid=2776 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 13 00:53:45.462000 audit[2922]: NETFILTER_CFG table=filter:94 family=10 entries=3 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Sep 13 00:53:45.462000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd8e49d5e0 a2=0 a3=7ffd8e49d5cc items=0 ppid=2776 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.462000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:53:45.462000 audit[2922]: NETFILTER_CFG table=nat:95 family=10 entries=7 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Sep 13 00:53:45.462000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd8e49d5e0 a2=0 a3=7ffd8e49d5cc items=0 ppid=2776 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:45.462000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:53:45.471013 env[1564]: time="2025-09-13T00:53:45.470946557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:53:45.471157 env[1564]: time="2025-09-13T00:53:45.470989757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:53:45.471157 env[1564]: time="2025-09-13T00:53:45.471005057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:53:45.471292 env[1564]: time="2025-09-13T00:53:45.471150155Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e784c65cfb103f82c089cb8a2379102d73c500ce0f7c1a07572331776392b38e pid=2931 runtime=io.containerd.runc.v2
Sep 13 00:53:45.523287 env[1564]: time="2025-09-13T00:53:45.523236370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-2chfv,Uid:a6daaea9-bc02-46cc-bb1d-18f0d5a7a5f3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e784c65cfb103f82c089cb8a2379102d73c500ce0f7c1a07572331776392b38e\""
Sep 13 00:53:45.526027 env[1564]: time="2025-09-13T00:53:45.525553049Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:53:45.764027 systemd[1]: run-containerd-runc-k8s.io-4cf71855c3a84c00e09236ca564ca6390f70d0ee3e7a0136464259c7f356fb93-runc.7sBVT1.mount: Deactivated successfully.
Sep 13 00:53:47.132567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174924964.mount: Deactivated successfully.
Sep 13 00:53:48.854337 env[1564]: time="2025-09-13T00:53:48.854290853Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.860521 env[1564]: time="2025-09-13T00:53:48.860481100Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.863977 env[1564]: time="2025-09-13T00:53:48.863946170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.869217 env[1564]: time="2025-09-13T00:53:48.869167825Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.869687 env[1564]: time="2025-09-13T00:53:48.869658621Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 00:53:48.872210 env[1564]: time="2025-09-13T00:53:48.872167999Z" level=info msg="CreateContainer within sandbox \"e784c65cfb103f82c089cb8a2379102d73c500ce0f7c1a07572331776392b38e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:53:48.898489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209533590.mount: Deactivated successfully.
Sep 13 00:53:48.911962 env[1564]: time="2025-09-13T00:53:48.911921457Z" level=info msg="CreateContainer within sandbox \"e784c65cfb103f82c089cb8a2379102d73c500ce0f7c1a07572331776392b38e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6948df44e07f3d05a527b80e7e6697d1a90062842b88d4f374c1949445698d46\""
Sep 13 00:53:48.913250 env[1564]: time="2025-09-13T00:53:48.913224045Z" level=info msg="StartContainer for \"6948df44e07f3d05a527b80e7e6697d1a90062842b88d4f374c1949445698d46\""
Sep 13 00:53:48.969226 env[1564]: time="2025-09-13T00:53:48.969154663Z" level=info msg="StartContainer for \"6948df44e07f3d05a527b80e7e6697d1a90062842b88d4f374c1949445698d46\" returns successfully"
Sep 13 00:53:49.851742 kubelet[2667]: I0913 00:53:49.850653 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-2chfv" podStartSLOduration=1.5042829960000001 podStartE2EDuration="4.850633448s" podCreationTimestamp="2025-09-13 00:53:45 +0000 UTC" firstStartedPulling="2025-09-13 00:53:45.52433826 +0000 UTC m=+5.900391548" lastFinishedPulling="2025-09-13 00:53:48.870688612 +0000 UTC m=+9.246742000" observedRunningTime="2025-09-13 00:53:49.850503249 +0000 UTC m=+10.226556537" watchObservedRunningTime="2025-09-13 00:53:49.850633448 +0000 UTC m=+10.226686836"
Sep 13 00:53:49.851742 kubelet[2667]: I0913 00:53:49.850971 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7qkj6" podStartSLOduration=5.850959245 podStartE2EDuration="5.850959245s" podCreationTimestamp="2025-09-13 00:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:45.856030973 +0000 UTC m=+6.232084261" watchObservedRunningTime="2025-09-13 00:53:49.850959245 +0000 UTC m=+10.227012533"
Sep 13 00:53:55.243035 sudo[2018]: pam_unix(sudo:session): session closed for user root
Sep 13 00:53:55.241000 audit[2018]: USER_END pid=2018 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.248073 kernel: kauditd_printk_skb: 143 callbacks suppressed
Sep 13 00:53:55.248148 kernel: audit: type=1106 audit(1757724835.241:288): pid=2018 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.247000 audit[2018]: CRED_DISP pid=2018 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.290208 kernel: audit: type=1104 audit(1757724835.247:289): pid=2018 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.370918 sshd[2014]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:55.371000 audit[2014]: USER_END pid=2014 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:53:55.374595 systemd[1]: sshd@6-10.200.4.17:22-10.200.16.10:41544.service: Deactivated successfully.
Sep 13 00:53:55.376050 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:53:55.376585 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:53:55.377657 systemd-logind[1540]: Removed session 9.
Sep 13 00:53:55.397205 kernel: audit: type=1106 audit(1757724835.371:290): pid=2014 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:55.371000 audit[2014]: CRED_DISP pid=2014 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:55.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.17:22-10.200.16.10:41544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:55.463023 kernel: audit: type=1104 audit(1757724835.371:291): pid=2014 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:53:55.463158 kernel: audit: type=1131 audit(1757724835.372:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.17:22-10.200.16.10:41544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:57.037000 audit[3048]: NETFILTER_CFG table=filter:96 family=2 entries=15 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:57.053200 kernel: audit: type=1325 audit(1757724837.037:293): table=filter:96 family=2 entries=15 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:57.037000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcb5dd5a70 a2=0 a3=7ffcb5dd5a5c items=0 ppid=2776 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:57.091297 kernel: audit: type=1300 audit(1757724837.037:293): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcb5dd5a70 a2=0 a3=7ffcb5dd5a5c items=0 ppid=2776 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:57.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:57.135206 kernel: audit: type=1327 audit(1757724837.037:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:57.063000 audit[3048]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:57.160217 kernel: audit: type=1325 audit(1757724837.063:294): table=nat:97 family=2 entries=12 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:57.063000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb5dd5a70 a2=0 a3=0 items=0 ppid=2776 pid=3048 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:57.063000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:57.160000 audit[3050]: NETFILTER_CFG table=filter:98 family=2 entries=16 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:57.160000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb477a650 a2=0 a3=7fffb477a63c items=0 ppid=2776 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:57.160000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:57.184206 kernel: audit: type=1300 audit(1757724837.063:294): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb5dd5a70 a2=0 a3=0 items=0 ppid=2776 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:57.187000 audit[3050]: NETFILTER_CFG table=nat:99 family=2 entries=12 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:57.187000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb477a650 a2=0 a3=0 items=0 ppid=2776 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:57.187000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:59.548000 audit[3052]: NETFILTER_CFG table=filter:100 family=2 entries=17 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:59.548000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd51957b60 a2=0 a3=7ffd51957b4c items=0 ppid=2776 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:59.548000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:59.553000 audit[3052]: NETFILTER_CFG table=nat:101 family=2 entries=12 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:59.553000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd51957b60 a2=0 a3=0 items=0 ppid=2776 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:59.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:59.581000 audit[3054]: NETFILTER_CFG table=filter:102 family=2 entries=18 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:59.581000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcd62e00d0 a2=0 a3=7ffcd62e00bc items=0 ppid=2776 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:59.581000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:53:59.586000 audit[3054]: NETFILTER_CFG table=nat:103 family=2 entries=12 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:53:59.586000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd62e00d0 a2=0 a3=0 items=0 ppid=2776 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:59.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:00.045127 kubelet[2667]: I0913 00:54:00.045090 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abbee19-bba1-4425-8f7e-dc9a73479417-tigera-ca-bundle\") pod \"calico-typha-784948978f-ch8mh\" (UID: \"0abbee19-bba1-4425-8f7e-dc9a73479417\") " pod="calico-system/calico-typha-784948978f-ch8mh" Sep 13 00:54:00.045697 kubelet[2667]: I0913 00:54:00.045675 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0abbee19-bba1-4425-8f7e-dc9a73479417-typha-certs\") pod \"calico-typha-784948978f-ch8mh\" (UID: \"0abbee19-bba1-4425-8f7e-dc9a73479417\") " pod="calico-system/calico-typha-784948978f-ch8mh" Sep 13 00:54:00.045843 kubelet[2667]: I0913 00:54:00.045827 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl2d7\" (UniqueName: \"kubernetes.io/projected/0abbee19-bba1-4425-8f7e-dc9a73479417-kube-api-access-hl2d7\") pod 
\"calico-typha-784948978f-ch8mh\" (UID: \"0abbee19-bba1-4425-8f7e-dc9a73479417\") " pod="calico-system/calico-typha-784948978f-ch8mh" Sep 13 00:54:00.303290 env[1564]: time="2025-09-13T00:54:00.302781053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784948978f-ch8mh,Uid:0abbee19-bba1-4425-8f7e-dc9a73479417,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:00.355001 kubelet[2667]: I0913 00:54:00.354963 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c31fdf0-d773-478e-b8dd-41a9a84039eb-tigera-ca-bundle\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.355214 kubelet[2667]: I0913 00:54:00.355176 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6vm\" (UniqueName: \"kubernetes.io/projected/1c31fdf0-d773-478e-b8dd-41a9a84039eb-kube-api-access-6p6vm\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.355349 kubelet[2667]: I0913 00:54:00.355335 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1c31fdf0-d773-478e-b8dd-41a9a84039eb-node-certs\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.355451 kubelet[2667]: I0913 00:54:00.355439 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c31fdf0-d773-478e-b8dd-41a9a84039eb-xtables-lock\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.355537 kubelet[2667]: I0913 00:54:00.355525 2667 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1c31fdf0-d773-478e-b8dd-41a9a84039eb-cni-bin-dir\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.355620 kubelet[2667]: I0913 00:54:00.355609 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1c31fdf0-d773-478e-b8dd-41a9a84039eb-cni-net-dir\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.355694 kubelet[2667]: I0913 00:54:00.355682 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1c31fdf0-d773-478e-b8dd-41a9a84039eb-var-run-calico\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.355784 kubelet[2667]: I0913 00:54:00.355771 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1c31fdf0-d773-478e-b8dd-41a9a84039eb-flexvol-driver-host\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.355867 kubelet[2667]: I0913 00:54:00.355854 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c31fdf0-d773-478e-b8dd-41a9a84039eb-lib-modules\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.355950 kubelet[2667]: I0913 00:54:00.355939 2667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c31fdf0-d773-478e-b8dd-41a9a84039eb-var-lib-calico\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.356036 kubelet[2667]: I0913 00:54:00.356014 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1c31fdf0-d773-478e-b8dd-41a9a84039eb-cni-log-dir\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.356116 kubelet[2667]: I0913 00:54:00.356105 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1c31fdf0-d773-478e-b8dd-41a9a84039eb-policysync\") pod \"calico-node-wlxf8\" (UID: \"1c31fdf0-d773-478e-b8dd-41a9a84039eb\") " pod="calico-system/calico-node-wlxf8" Sep 13 00:54:00.361373 env[1564]: time="2025-09-13T00:54:00.361302076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:00.361373 env[1564]: time="2025-09-13T00:54:00.361340276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:00.361373 env[1564]: time="2025-09-13T00:54:00.361355576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:00.361702 env[1564]: time="2025-09-13T00:54:00.361663074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5811b819acd98fd49d4b38a3eee40a617be17d80205b63bd4e8e11d6f8dfc030 pid=3064 runtime=io.containerd.runc.v2
Sep 13 00:54:00.421644 env[1564]: time="2025-09-13T00:54:00.420987292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784948978f-ch8mh,Uid:0abbee19-bba1-4425-8f7e-dc9a73479417,Namespace:calico-system,Attempt:0,} returns sandbox id \"5811b819acd98fd49d4b38a3eee40a617be17d80205b63bd4e8e11d6f8dfc030\""
Sep 13 00:54:00.423542 env[1564]: time="2025-09-13T00:54:00.422970679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:54:00.464415 kubelet[2667]: E0913 00:54:00.460288 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:00.464415 kubelet[2667]: W0913 00:54:00.460321 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:00.464415 kubelet[2667]: E0913 00:54:00.460353 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 00:54:00.464415 kubelet[2667]: E0913 00:54:00.460591 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.464415 kubelet[2667]: W0913 00:54:00.460600 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.464415 kubelet[2667]: E0913 00:54:00.460617 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.464415 kubelet[2667]: E0913 00:54:00.460796 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.464415 kubelet[2667]: W0913 00:54:00.460805 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.464415 kubelet[2667]: E0913 00:54:00.460817 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.464415 kubelet[2667]: E0913 00:54:00.461042 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.464923 kubelet[2667]: W0913 00:54:00.461052 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.464923 kubelet[2667]: E0913 00:54:00.461063 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.468288 kubelet[2667]: E0913 00:54:00.468262 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.468288 kubelet[2667]: W0913 00:54:00.468283 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.468438 kubelet[2667]: E0913 00:54:00.468303 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.473776 kubelet[2667]: E0913 00:54:00.473749 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.473911 kubelet[2667]: W0913 00:54:00.473895 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.474007 kubelet[2667]: E0913 00:54:00.473995 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.541888 kubelet[2667]: E0913 00:54:00.541837 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kg2m8" podUID="79f7874d-3642-4f78-9634-0fbf12fe0b02" Sep 13 00:54:00.544930 env[1564]: time="2025-09-13T00:54:00.544890894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wlxf8,Uid:1c31fdf0-d773-478e-b8dd-41a9a84039eb,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:00.557439 kubelet[2667]: E0913 00:54:00.557349 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.557439 kubelet[2667]: W0913 00:54:00.557377 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.557439 kubelet[2667]: E0913 00:54:00.557401 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.559159 kubelet[2667]: E0913 00:54:00.557723 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559159 kubelet[2667]: W0913 00:54:00.557736 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559159 kubelet[2667]: E0913 00:54:00.557753 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.559159 kubelet[2667]: E0913 00:54:00.557938 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559159 kubelet[2667]: W0913 00:54:00.557946 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559159 kubelet[2667]: E0913 00:54:00.557956 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.559159 kubelet[2667]: E0913 00:54:00.558129 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559159 kubelet[2667]: W0913 00:54:00.558136 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559159 kubelet[2667]: E0913 00:54:00.558145 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.559159 kubelet[2667]: E0913 00:54:00.558347 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559620 kubelet[2667]: W0913 00:54:00.558356 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559620 kubelet[2667]: E0913 00:54:00.558365 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.559620 kubelet[2667]: E0913 00:54:00.558526 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559620 kubelet[2667]: W0913 00:54:00.558533 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559620 kubelet[2667]: E0913 00:54:00.558542 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.559620 kubelet[2667]: E0913 00:54:00.558691 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559620 kubelet[2667]: W0913 00:54:00.558698 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559620 kubelet[2667]: E0913 00:54:00.558706 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.559620 kubelet[2667]: E0913 00:54:00.558863 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559620 kubelet[2667]: W0913 00:54:00.558871 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559968 kubelet[2667]: E0913 00:54:00.558880 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.559968 kubelet[2667]: E0913 00:54:00.559059 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559968 kubelet[2667]: W0913 00:54:00.559068 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559968 kubelet[2667]: E0913 00:54:00.559079 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.559968 kubelet[2667]: E0913 00:54:00.559280 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559968 kubelet[2667]: W0913 00:54:00.559289 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559968 kubelet[2667]: E0913 00:54:00.559300 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.559968 kubelet[2667]: E0913 00:54:00.559465 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.559968 kubelet[2667]: W0913 00:54:00.559472 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.559968 kubelet[2667]: E0913 00:54:00.559482 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.560364 kubelet[2667]: E0913 00:54:00.559647 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.560364 kubelet[2667]: W0913 00:54:00.559655 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.560364 kubelet[2667]: E0913 00:54:00.559664 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.560364 kubelet[2667]: E0913 00:54:00.559852 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.560364 kubelet[2667]: W0913 00:54:00.559860 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.560364 kubelet[2667]: E0913 00:54:00.559873 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.560364 kubelet[2667]: E0913 00:54:00.560086 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.560364 kubelet[2667]: W0913 00:54:00.560094 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.560364 kubelet[2667]: E0913 00:54:00.560104 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.560364 kubelet[2667]: E0913 00:54:00.560284 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.560739 kubelet[2667]: W0913 00:54:00.560292 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.560739 kubelet[2667]: E0913 00:54:00.560302 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.560739 kubelet[2667]: E0913 00:54:00.560473 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.560739 kubelet[2667]: W0913 00:54:00.560481 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.560739 kubelet[2667]: E0913 00:54:00.560490 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.560739 kubelet[2667]: E0913 00:54:00.560669 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.560739 kubelet[2667]: W0913 00:54:00.560677 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.560739 kubelet[2667]: E0913 00:54:00.560686 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.561039 kubelet[2667]: E0913 00:54:00.560853 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.561039 kubelet[2667]: W0913 00:54:00.560861 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.561039 kubelet[2667]: E0913 00:54:00.560870 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.561039 kubelet[2667]: E0913 00:54:00.561038 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.561213 kubelet[2667]: W0913 00:54:00.561046 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.561213 kubelet[2667]: E0913 00:54:00.561055 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.561283 kubelet[2667]: E0913 00:54:00.561228 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.561283 kubelet[2667]: W0913 00:54:00.561236 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.561283 kubelet[2667]: E0913 00:54:00.561246 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.564256 kubelet[2667]: E0913 00:54:00.561576 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.564256 kubelet[2667]: W0913 00:54:00.561588 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.564256 kubelet[2667]: E0913 00:54:00.561602 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.564256 kubelet[2667]: I0913 00:54:00.561637 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79f7874d-3642-4f78-9634-0fbf12fe0b02-socket-dir\") pod \"csi-node-driver-kg2m8\" (UID: \"79f7874d-3642-4f78-9634-0fbf12fe0b02\") " pod="calico-system/csi-node-driver-kg2m8" Sep 13 00:54:00.564256 kubelet[2667]: E0913 00:54:00.561873 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.564256 kubelet[2667]: W0913 00:54:00.561901 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.564256 kubelet[2667]: E0913 00:54:00.561917 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.564256 kubelet[2667]: I0913 00:54:00.561938 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmkt\" (UniqueName: \"kubernetes.io/projected/79f7874d-3642-4f78-9634-0fbf12fe0b02-kube-api-access-pcmkt\") pod \"csi-node-driver-kg2m8\" (UID: \"79f7874d-3642-4f78-9634-0fbf12fe0b02\") " pod="calico-system/csi-node-driver-kg2m8" Sep 13 00:54:00.564256 kubelet[2667]: E0913 00:54:00.562228 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.564608 kubelet[2667]: W0913 00:54:00.562240 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.564608 kubelet[2667]: E0913 00:54:00.562256 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.564608 kubelet[2667]: E0913 00:54:00.562575 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.564608 kubelet[2667]: W0913 00:54:00.562588 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.564608 kubelet[2667]: E0913 00:54:00.562624 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.564608 kubelet[2667]: E0913 00:54:00.562841 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.564608 kubelet[2667]: W0913 00:54:00.562851 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.564608 kubelet[2667]: E0913 00:54:00.562866 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.564608 kubelet[2667]: I0913 00:54:00.562889 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79f7874d-3642-4f78-9634-0fbf12fe0b02-kubelet-dir\") pod \"csi-node-driver-kg2m8\" (UID: \"79f7874d-3642-4f78-9634-0fbf12fe0b02\") " pod="calico-system/csi-node-driver-kg2m8" Sep 13 00:54:00.565207 kubelet[2667]: E0913 00:54:00.564950 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.565207 kubelet[2667]: W0913 00:54:00.564966 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.565207 kubelet[2667]: E0913 00:54:00.565061 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.565207 kubelet[2667]: I0913 00:54:00.565180 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79f7874d-3642-4f78-9634-0fbf12fe0b02-registration-dir\") pod \"csi-node-driver-kg2m8\" (UID: \"79f7874d-3642-4f78-9634-0fbf12fe0b02\") " pod="calico-system/csi-node-driver-kg2m8" Sep 13 00:54:00.567585 kubelet[2667]: E0913 00:54:00.565332 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.567585 kubelet[2667]: W0913 00:54:00.565341 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.567585 kubelet[2667]: E0913 00:54:00.565420 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.567585 kubelet[2667]: E0913 00:54:00.565541 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.567585 kubelet[2667]: W0913 00:54:00.565549 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.567585 kubelet[2667]: E0913 00:54:00.565563 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.567585 kubelet[2667]: E0913 00:54:00.565719 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.567585 kubelet[2667]: W0913 00:54:00.565726 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.567585 kubelet[2667]: E0913 00:54:00.565738 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.567911 kubelet[2667]: I0913 00:54:00.565766 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/79f7874d-3642-4f78-9634-0fbf12fe0b02-varrun\") pod \"csi-node-driver-kg2m8\" (UID: \"79f7874d-3642-4f78-9634-0fbf12fe0b02\") " pod="calico-system/csi-node-driver-kg2m8" Sep 13 00:54:00.567911 kubelet[2667]: E0913 00:54:00.565928 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.567911 kubelet[2667]: W0913 00:54:00.565938 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.567911 kubelet[2667]: E0913 00:54:00.565950 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.567911 kubelet[2667]: E0913 00:54:00.566124 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.567911 kubelet[2667]: W0913 00:54:00.566133 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.567911 kubelet[2667]: E0913 00:54:00.566152 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.567911 kubelet[2667]: E0913 00:54:00.566725 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.567911 kubelet[2667]: W0913 00:54:00.566752 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.568279 kubelet[2667]: E0913 00:54:00.566770 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.568279 kubelet[2667]: E0913 00:54:00.568014 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.568279 kubelet[2667]: W0913 00:54:00.568026 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.568279 kubelet[2667]: E0913 00:54:00.568041 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.568279 kubelet[2667]: E0913 00:54:00.568231 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.568279 kubelet[2667]: W0913 00:54:00.568240 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.568279 kubelet[2667]: E0913 00:54:00.568251 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.568562 kubelet[2667]: E0913 00:54:00.568421 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.568562 kubelet[2667]: W0913 00:54:00.568428 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.568562 kubelet[2667]: E0913 00:54:00.568437 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.590182 env[1564]: time="2025-09-13T00:54:00.584937436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:00.590182 env[1564]: time="2025-09-13T00:54:00.585015436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:00.590182 env[1564]: time="2025-09-13T00:54:00.585043036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:00.590182 env[1564]: time="2025-09-13T00:54:00.585225935Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b5922af95b3d398207b0527aedfb2ca413d2588f9ffe376ce71dd0c36778cf3 pid=3161 runtime=io.containerd.runc.v2 Sep 13 00:54:00.612239 kernel: kauditd_printk_skb: 19 callbacks suppressed Sep 13 00:54:00.612336 kernel: audit: type=1325 audit(1757724840.606:301): table=filter:104 family=2 entries=20 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:00.606000 audit[3180]: NETFILTER_CFG table=filter:104 family=2 entries=20 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:00.606000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff441551c0 a2=0 a3=7fff441551ac items=0 ppid=2776 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:00.646808 kernel: audit: type=1300 audit(1757724840.606:301): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff441551c0 a2=0 a3=7fff441551ac items=0 ppid=2776 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:00.658796 kernel: audit: type=1327 audit(1757724840.606:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:00.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:00.673381 kernel: audit: type=1325 audit(1757724840.623:302): table=nat:105 family=2 
entries=12 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:00.623000 audit[3180]: NETFILTER_CFG table=nat:105 family=2 entries=12 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:00.675630 kubelet[2667]: E0913 00:54:00.673794 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.675630 kubelet[2667]: W0913 00:54:00.673816 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.675630 kubelet[2667]: E0913 00:54:00.673839 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.675630 kubelet[2667]: E0913 00:54:00.674100 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.675630 kubelet[2667]: W0913 00:54:00.674113 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.675630 kubelet[2667]: E0913 00:54:00.674141 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.675630 kubelet[2667]: E0913 00:54:00.674369 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.675630 kubelet[2667]: W0913 00:54:00.674379 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.675630 kubelet[2667]: E0913 00:54:00.674390 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.675630 kubelet[2667]: E0913 00:54:00.674593 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.676108 kubelet[2667]: W0913 00:54:00.674603 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.676108 kubelet[2667]: E0913 00:54:00.674614 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.676108 kubelet[2667]: E0913 00:54:00.674820 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.676108 kubelet[2667]: W0913 00:54:00.674828 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.676108 kubelet[2667]: E0913 00:54:00.674842 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.679114 kubelet[2667]: E0913 00:54:00.676450 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.679114 kubelet[2667]: W0913 00:54:00.676471 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.679114 kubelet[2667]: E0913 00:54:00.676490 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.679114 kubelet[2667]: E0913 00:54:00.676703 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.679114 kubelet[2667]: W0913 00:54:00.676712 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.679114 kubelet[2667]: E0913 00:54:00.678099 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.679114 kubelet[2667]: E0913 00:54:00.678385 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.679114 kubelet[2667]: W0913 00:54:00.678398 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.679114 kubelet[2667]: E0913 00:54:00.678464 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.679114 kubelet[2667]: E0913 00:54:00.678674 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.679735 kubelet[2667]: W0913 00:54:00.678706 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.679735 kubelet[2667]: E0913 00:54:00.678789 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.679735 kubelet[2667]: E0913 00:54:00.678962 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.679735 kubelet[2667]: W0913 00:54:00.678970 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.679735 kubelet[2667]: E0913 00:54:00.679036 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.680732 kubelet[2667]: E0913 00:54:00.680046 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.680732 kubelet[2667]: W0913 00:54:00.680058 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.680732 kubelet[2667]: E0913 00:54:00.680106 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.680732 kubelet[2667]: E0913 00:54:00.680375 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.680732 kubelet[2667]: W0913 00:54:00.680385 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.680732 kubelet[2667]: E0913 00:54:00.680426 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.680732 kubelet[2667]: E0913 00:54:00.680627 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.680732 kubelet[2667]: W0913 00:54:00.680636 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.681488 kubelet[2667]: E0913 00:54:00.680908 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.683655 kubelet[2667]: E0913 00:54:00.681548 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.683655 kubelet[2667]: W0913 00:54:00.681567 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.684113 kubelet[2667]: E0913 00:54:00.681619 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.684113 kubelet[2667]: E0913 00:54:00.684016 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.684113 kubelet[2667]: W0913 00:54:00.684035 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.684447 kubelet[2667]: E0913 00:54:00.684347 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.684570 kubelet[2667]: E0913 00:54:00.684559 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.684775 kubelet[2667]: W0913 00:54:00.684643 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.684961 kubelet[2667]: E0913 00:54:00.684949 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:00.685216 kubelet[2667]: E0913 00:54:00.685175 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.685299 kubelet[2667]: W0913 00:54:00.685287 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.685437 kubelet[2667]: E0913 00:54:00.685425 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:00.685680 kubelet[2667]: E0913 00:54:00.685665 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:00.685759 kubelet[2667]: W0913 00:54:00.685749 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:00.685898 kubelet[2667]: E0913 00:54:00.685886 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:54:00.686238 kubelet[2667]: E0913 00:54:00.686228 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:00.686329 kubelet[2667]: W0913 00:54:00.686319 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:00.686467 kubelet[2667]: E0913 00:54:00.686455 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:00.687014 kubelet[2667]: E0913 00:54:00.686997 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:00.687982 kubelet[2667]: W0913 00:54:00.687965 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:00.688166 kubelet[2667]: E0913 00:54:00.688145 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:00.688685 kubelet[2667]: E0913 00:54:00.688674 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:00.688813 kubelet[2667]: W0913 00:54:00.688801 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:00.689065 kubelet[2667]: E0913 00:54:00.689047 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:00.689278 kubelet[2667]: E0913 00:54:00.689265 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:00.689330 kubelet[2667]: W0913 00:54:00.689278 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:00.689542 kubelet[2667]: E0913 00:54:00.689531 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:00.719256 kernel: audit: type=1300 audit(1757724840.623:302): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff441551c0 a2=0 a3=0 items=0 ppid=2776 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:54:00.623000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff441551c0 a2=0 a3=0 items=0 ppid=2776 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:54:00.719507 env[1564]: time="2025-09-13T00:54:00.706311955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wlxf8,Uid:1c31fdf0-d773-478e-b8dd-41a9a84039eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b5922af95b3d398207b0527aedfb2ca413d2588f9ffe376ce71dd0c36778cf3\""
Sep 13 00:54:00.719576 kubelet[2667]: E0913 00:54:00.689771 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:00.719576 kubelet[2667]: W0913 00:54:00.689782 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:00.719576 kubelet[2667]: E0913 00:54:00.689882 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:00.719576 kubelet[2667]: E0913 00:54:00.689997 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:00.719576 kubelet[2667]: W0913 00:54:00.690004 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:00.719576 kubelet[2667]: E0913 00:54:00.690079 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:00.719576 kubelet[2667]: E0913 00:54:00.690365 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:00.719576 kubelet[2667]: W0913 00:54:00.690374 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:00.719576 kubelet[2667]: E0913 00:54:00.690383 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:00.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:54:00.726802 kubelet[2667]: E0913 00:54:00.726784 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:00.726939 kubelet[2667]: W0913 00:54:00.726927 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:00.727009 kubelet[2667]: E0913 00:54:00.726998 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:00.737228 kernel: audit: type=1327 audit(1757724840.623:302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:54:01.602767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1683162202.mount: Deactivated successfully.
Sep 13 00:54:01.779351 kubelet[2667]: E0913 00:54:01.778671 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kg2m8" podUID="79f7874d-3642-4f78-9634-0fbf12fe0b02"
Sep 13 00:54:02.793724 env[1564]: time="2025-09-13T00:54:02.793592595Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:02.800960 env[1564]: time="2025-09-13T00:54:02.800922550Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:02.804771 env[1564]: time="2025-09-13T00:54:02.804738326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:02.808438 env[1564]: time="2025-09-13T00:54:02.808410304Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:02.808825 env[1564]: time="2025-09-13T00:54:02.808793501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:54:02.820307 env[1564]: time="2025-09-13T00:54:02.820252831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:54:02.827293 env[1564]: time="2025-09-13T00:54:02.827252788Z" level=info msg="CreateContainer within sandbox \"5811b819acd98fd49d4b38a3eee40a617be17d80205b63bd4e8e11d6f8dfc030\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:54:02.866557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3768797186.mount: Deactivated successfully.
Sep 13 00:54:02.872572 env[1564]: time="2025-09-13T00:54:02.872528209Z" level=info msg="CreateContainer within sandbox \"5811b819acd98fd49d4b38a3eee40a617be17d80205b63bd4e8e11d6f8dfc030\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a4047d827721f6898c7ac3f3820f235cf1a7eaac7ffd17734852cfc501813c9e\""
Sep 13 00:54:02.874321 env[1564]: time="2025-09-13T00:54:02.873249105Z" level=info msg="StartContainer for \"a4047d827721f6898c7ac3f3820f235cf1a7eaac7ffd17734852cfc501813c9e\""
Sep 13 00:54:02.941993 env[1564]: time="2025-09-13T00:54:02.941951382Z" level=info msg="StartContainer for \"a4047d827721f6898c7ac3f3820f235cf1a7eaac7ffd17734852cfc501813c9e\" returns successfully"
Sep 13 00:54:03.779600 kubelet[2667]: E0913 00:54:03.778741 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kg2m8" podUID="79f7874d-3642-4f78-9634-0fbf12fe0b02"
Sep 13 00:54:03.893923 kubelet[2667]: E0913 00:54:03.893893 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.894707 kubelet[2667]: W0913 00:54:03.894685 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.894841 kubelet[2667]: E0913 00:54:03.894827 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.895263 kubelet[2667]: E0913 00:54:03.895249 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.895382 kubelet[2667]: W0913 00:54:03.895371 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.895461 kubelet[2667]: E0913 00:54:03.895450 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.895729 kubelet[2667]: E0913 00:54:03.895718 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.895814 kubelet[2667]: W0913 00:54:03.895804 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.895888 kubelet[2667]: E0913 00:54:03.895877 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.896142 kubelet[2667]: E0913 00:54:03.896133 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.896243 kubelet[2667]: W0913 00:54:03.896234 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.896323 kubelet[2667]: E0913 00:54:03.896313 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.896634 kubelet[2667]: E0913 00:54:03.896624 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.896715 kubelet[2667]: W0913 00:54:03.896707 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.896787 kubelet[2667]: E0913 00:54:03.896766 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.897031 kubelet[2667]: E0913 00:54:03.897011 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.897116 kubelet[2667]: W0913 00:54:03.897107 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.897180 kubelet[2667]: E0913 00:54:03.897169 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.897428 kubelet[2667]: E0913 00:54:03.897419 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.897506 kubelet[2667]: W0913 00:54:03.897497 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.897564 kubelet[2667]: E0913 00:54:03.897554 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.897787 kubelet[2667]: E0913 00:54:03.897779 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.897858 kubelet[2667]: W0913 00:54:03.897850 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.897922 kubelet[2667]: E0913 00:54:03.897913 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.898174 kubelet[2667]: E0913 00:54:03.898164 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.898276 kubelet[2667]: W0913 00:54:03.898266 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.898336 kubelet[2667]: E0913 00:54:03.898326 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.898560 kubelet[2667]: E0913 00:54:03.898551 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.898632 kubelet[2667]: W0913 00:54:03.898624 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.898704 kubelet[2667]: E0913 00:54:03.898681 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.898919 kubelet[2667]: E0913 00:54:03.898911 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.899041 kubelet[2667]: W0913 00:54:03.899030 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.899107 kubelet[2667]: E0913 00:54:03.899097 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.899364 kubelet[2667]: E0913 00:54:03.899354 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.899440 kubelet[2667]: W0913 00:54:03.899432 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.899502 kubelet[2667]: E0913 00:54:03.899492 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.899719 kubelet[2667]: E0913 00:54:03.899710 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.899785 kubelet[2667]: W0913 00:54:03.899776 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.899849 kubelet[2667]: E0913 00:54:03.899840 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.900072 kubelet[2667]: E0913 00:54:03.900064 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.900136 kubelet[2667]: W0913 00:54:03.900126 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.900282 kubelet[2667]: E0913 00:54:03.900272 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.900508 kubelet[2667]: E0913 00:54:03.900499 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.900573 kubelet[2667]: W0913 00:54:03.900565 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.900630 kubelet[2667]: E0913 00:54:03.900621 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.900932 kubelet[2667]: E0913 00:54:03.900922 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.901012 kubelet[2667]: W0913 00:54:03.900999 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.901072 kubelet[2667]: E0913 00:54:03.901062 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.901346 kubelet[2667]: E0913 00:54:03.901335 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.901426 kubelet[2667]: W0913 00:54:03.901417 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.901486 kubelet[2667]: E0913 00:54:03.901477 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.901783 kubelet[2667]: E0913 00:54:03.901758 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.901871 kubelet[2667]: W0913 00:54:03.901862 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.901948 kubelet[2667]: E0913 00:54:03.901939 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.902301 kubelet[2667]: E0913 00:54:03.902263 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.902301 kubelet[2667]: W0913 00:54:03.902285 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.902427 kubelet[2667]: E0913 00:54:03.902304 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.902574 kubelet[2667]: E0913 00:54:03.902565 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.902642 kubelet[2667]: W0913 00:54:03.902633 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.902721 kubelet[2667]: E0913 00:54:03.902710 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.902939 kubelet[2667]: E0913 00:54:03.902930 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.903006 kubelet[2667]: W0913 00:54:03.902997 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.903069 kubelet[2667]: E0913 00:54:03.903059 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.903335 kubelet[2667]: E0913 00:54:03.903318 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.903335 kubelet[2667]: W0913 00:54:03.903332 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.903442 kubelet[2667]: E0913 00:54:03.903348 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.903547 kubelet[2667]: E0913 00:54:03.903538 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.903608 kubelet[2667]: W0913 00:54:03.903600 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.903684 kubelet[2667]: E0913 00:54:03.903670 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.903877 kubelet[2667]: E0913 00:54:03.903867 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.903945 kubelet[2667]: W0913 00:54:03.903936 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.904015 kubelet[2667]: E0913 00:54:03.904001 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.904272 kubelet[2667]: E0913 00:54:03.904262 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.904350 kubelet[2667]: W0913 00:54:03.904341 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.904416 kubelet[2667]: E0913 00:54:03.904405 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.904646 kubelet[2667]: E0913 00:54:03.904631 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.904695 kubelet[2667]: W0913 00:54:03.904647 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.904695 kubelet[2667]: E0913 00:54:03.904660 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.904868 kubelet[2667]: E0913 00:54:03.904860 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.904927 kubelet[2667]: W0913 00:54:03.904919 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.904988 kubelet[2667]: E0913 00:54:03.904978 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.905422 kubelet[2667]: E0913 00:54:03.905411 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.905498 kubelet[2667]: W0913 00:54:03.905489 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.905565 kubelet[2667]: E0913 00:54:03.905555 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.905831 kubelet[2667]: E0913 00:54:03.905820 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.905903 kubelet[2667]: W0913 00:54:03.905894 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.905979 kubelet[2667]: E0913 00:54:03.905969 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.906237 kubelet[2667]: E0913 00:54:03.906223 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.906303 kubelet[2667]: W0913 00:54:03.906238 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.906303 kubelet[2667]: E0913 00:54:03.906249 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.906668 kubelet[2667]: E0913 00:54:03.906406 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.906668 kubelet[2667]: W0913 00:54:03.906415 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.906668 kubelet[2667]: E0913 00:54:03.906424 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.906668 kubelet[2667]: E0913 00:54:03.906595 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.906668 kubelet[2667]: W0913 00:54:03.906603 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.906668 kubelet[2667]: E0913 00:54:03.906612 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:03.907097 kubelet[2667]: E0913 00:54:03.907082 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:03.907148 kubelet[2667]: W0913 00:54:03.907099 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:03.907148 kubelet[2667]: E0913 00:54:03.907110 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:04.021687 env[1564]: time="2025-09-13T00:54:04.021642288Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:04.028480 env[1564]: time="2025-09-13T00:54:04.028415848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:04.032708 env[1564]: time="2025-09-13T00:54:04.032109227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:04.035702 env[1564]: time="2025-09-13T00:54:04.035670706Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:04.036163 env[1564]: time="2025-09-13T00:54:04.036134303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 13 00:54:04.038542 env[1564]: time="2025-09-13T00:54:04.038513089Z" level=info msg="CreateContainer within sandbox \"7b5922af95b3d398207b0527aedfb2ca413d2588f9ffe376ce71dd0c36778cf3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 13 00:54:04.084423 env[1564]: time="2025-09-13T00:54:04.084376119Z" level=info msg="CreateContainer within sandbox \"7b5922af95b3d398207b0527aedfb2ca413d2588f9ffe376ce71dd0c36778cf3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d28dfa9d477b8a8c788279814068b1fe2375ebaf894a2eee78b1e9f27763b1b1\""
Sep 13 00:54:04.085128 env[1564]: time="2025-09-13T00:54:04.085099115Z" level=info msg="StartContainer for \"d28dfa9d477b8a8c788279814068b1fe2375ebaf894a2eee78b1e9f27763b1b1\""
Sep 13 00:54:04.121678 systemd[1]: run-containerd-runc-k8s.io-d28dfa9d477b8a8c788279814068b1fe2375ebaf894a2eee78b1e9f27763b1b1-runc.cAskFr.mount: Deactivated successfully.
Sep 13 00:54:04.172867 env[1564]: time="2025-09-13T00:54:04.172829600Z" level=info msg="StartContainer for \"d28dfa9d477b8a8c788279814068b1fe2375ebaf894a2eee78b1e9f27763b1b1\" returns successfully"
Sep 13 00:54:04.816194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d28dfa9d477b8a8c788279814068b1fe2375ebaf894a2eee78b1e9f27763b1b1-rootfs.mount: Deactivated successfully.
Sep 13 00:54:04.869123 kubelet[2667]: I0913 00:54:04.869098 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:54:04.885539 kubelet[2667]: I0913 00:54:04.885485 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-784948978f-ch8mh" podStartSLOduration=3.498289198 podStartE2EDuration="5.885465012s" podCreationTimestamp="2025-09-13 00:53:59 +0000 UTC" firstStartedPulling="2025-09-13 00:54:00.422618681 +0000 UTC m=+20.798672069" lastFinishedPulling="2025-09-13 00:54:02.809794595 +0000 UTC m=+23.185847883" observedRunningTime="2025-09-13 00:54:03.880474634 +0000 UTC m=+24.256528022" watchObservedRunningTime="2025-09-13 00:54:04.885465012 +0000 UTC m=+25.261518700"
Sep 13 00:54:05.725741 env[1564]: time="2025-09-13T00:54:05.725667368Z" level=info msg="shim disconnected" id=d28dfa9d477b8a8c788279814068b1fe2375ebaf894a2eee78b1e9f27763b1b1
Sep 13 00:54:05.725741 env[1564]: time="2025-09-13T00:54:05.725734568Z" level=warning msg="cleaning up after shim disconnected" id=d28dfa9d477b8a8c788279814068b1fe2375ebaf894a2eee78b1e9f27763b1b1 namespace=k8s.io
Sep 13 00:54:05.725741 env[1564]: time="2025-09-13T00:54:05.725747368Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:05.733682 env[1564]: time="2025-09-13T00:54:05.733636123Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3357 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:05.780213 kubelet[2667]: E0913 00:54:05.779232 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kg2m8" podUID="79f7874d-3642-4f78-9634-0fbf12fe0b02"
Sep 13 00:54:05.872828 env[1564]: time="2025-09-13T00:54:05.872780323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 00:54:07.779228 kubelet[2667]: E0913 00:54:07.778592 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kg2m8" podUID="79f7874d-3642-4f78-9634-0fbf12fe0b02"
Sep 13 00:54:09.624680 env[1564]: time="2025-09-13T00:54:09.624633398Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:09.633616 env[1564]: time="2025-09-13T00:54:09.633566251Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:09.637235 env[1564]: time="2025-09-13T00:54:09.637204232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:09.641490 env[1564]: time="2025-09-13T00:54:09.641461609Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:09.642390 env[1564]: time="2025-09-13T00:54:09.642359604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 13 00:54:09.645145 env[1564]: time="2025-09-13T00:54:09.645115590Z" level=info msg="CreateContainer within sandbox \"7b5922af95b3d398207b0527aedfb2ca413d2588f9ffe376ce71dd0c36778cf3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:54:09.684783 env[1564]: time="2025-09-13T00:54:09.684737381Z" level=info msg="CreateContainer within sandbox \"7b5922af95b3d398207b0527aedfb2ca413d2588f9ffe376ce71dd0c36778cf3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6889d4e240b26709f39d98e68f0a05991482bf4124867e1cfee76398d0f00385\""
Sep 13 00:54:09.686393 env[1564]: time="2025-09-13T00:54:09.685340078Z" level=info msg="StartContainer for \"6889d4e240b26709f39d98e68f0a05991482bf4124867e1cfee76398d0f00385\""
Sep 13 00:54:09.746491 env[1564]: time="2025-09-13T00:54:09.746417956Z" level=info msg="StartContainer for \"6889d4e240b26709f39d98e68f0a05991482bf4124867e1cfee76398d0f00385\" returns successfully"
Sep 13 00:54:09.779129 kubelet[2667]: E0913 00:54:09.779092 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kg2m8" podUID="79f7874d-3642-4f78-9634-0fbf12fe0b02"
Sep 13 00:54:11.444730 env[1564]: time="2025-09-13T00:54:11.444661417Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:54:11.470602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6889d4e240b26709f39d98e68f0a05991482bf4124867e1cfee76398d0f00385-rootfs.mount: Deactivated successfully. Sep 13 00:54:11.515834 kubelet[2667]: I0913 00:54:11.515811 2667 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:54:11.652175 kubelet[2667]: I0913 00:54:11.652132 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15e9425d-6b94-4324-8278-89ee850f4d55-calico-apiserver-certs\") pod \"calico-apiserver-67c4bc6787-kfxxk\" (UID: \"15e9425d-6b94-4324-8278-89ee850f4d55\") " pod="calico-apiserver/calico-apiserver-67c4bc6787-kfxxk" Sep 13 00:54:11.652175 kubelet[2667]: I0913 00:54:11.652177 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9rf\" (UniqueName: \"kubernetes.io/projected/87b59c29-b153-48e3-b1f8-09c6220faf33-kube-api-access-5c9rf\") pod \"whisker-76f544d5bb-l8svr\" (UID: \"87b59c29-b153-48e3-b1f8-09c6220faf33\") " pod="calico-system/whisker-76f544d5bb-l8svr" Sep 13 00:54:11.652409 kubelet[2667]: I0913 00:54:11.652220 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0838642-fa34-4ba2-b6e9-33b770e8c2d4-tigera-ca-bundle\") pod \"calico-kube-controllers-77c444948d-hspwf\" (UID: \"e0838642-fa34-4ba2-b6e9-33b770e8c2d4\") " pod="calico-system/calico-kube-controllers-77c444948d-hspwf" Sep 13 00:54:11.652409 kubelet[2667]: I0913 00:54:11.652244 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcls\" (UniqueName: 
\"kubernetes.io/projected/15e9425d-6b94-4324-8278-89ee850f4d55-kube-api-access-cbcls\") pod \"calico-apiserver-67c4bc6787-kfxxk\" (UID: \"15e9425d-6b94-4324-8278-89ee850f4d55\") " pod="calico-apiserver/calico-apiserver-67c4bc6787-kfxxk" Sep 13 00:54:11.652409 kubelet[2667]: I0913 00:54:11.652267 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87b59c29-b153-48e3-b1f8-09c6220faf33-whisker-backend-key-pair\") pod \"whisker-76f544d5bb-l8svr\" (UID: \"87b59c29-b153-48e3-b1f8-09c6220faf33\") " pod="calico-system/whisker-76f544d5bb-l8svr" Sep 13 00:54:11.652409 kubelet[2667]: I0913 00:54:11.652287 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87b59c29-b153-48e3-b1f8-09c6220faf33-whisker-ca-bundle\") pod \"whisker-76f544d5bb-l8svr\" (UID: \"87b59c29-b153-48e3-b1f8-09c6220faf33\") " pod="calico-system/whisker-76f544d5bb-l8svr" Sep 13 00:54:11.652409 kubelet[2667]: I0913 00:54:11.652315 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slr5m\" (UniqueName: \"kubernetes.io/projected/e0838642-fa34-4ba2-b6e9-33b770e8c2d4-kube-api-access-slr5m\") pod \"calico-kube-controllers-77c444948d-hspwf\" (UID: \"e0838642-fa34-4ba2-b6e9-33b770e8c2d4\") " pod="calico-system/calico-kube-controllers-77c444948d-hspwf" Sep 13 00:54:11.752842 kubelet[2667]: I0913 00:54:11.752724 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kzm6\" (UniqueName: \"kubernetes.io/projected/befb9c20-74f9-48dc-9181-e5e1cb0477a7-kube-api-access-5kzm6\") pod \"calico-apiserver-748d86bbf-sqqp6\" (UID: \"befb9c20-74f9-48dc-9181-e5e1cb0477a7\") " pod="calico-apiserver/calico-apiserver-748d86bbf-sqqp6" Sep 13 00:54:11.753061 kubelet[2667]: I0913 
00:54:11.753042 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/befb9c20-74f9-48dc-9181-e5e1cb0477a7-calico-apiserver-certs\") pod \"calico-apiserver-748d86bbf-sqqp6\" (UID: \"befb9c20-74f9-48dc-9181-e5e1cb0477a7\") " pod="calico-apiserver/calico-apiserver-748d86bbf-sqqp6" Sep 13 00:54:11.753165 kubelet[2667]: I0913 00:54:11.753152 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8f79\" (UniqueName: \"kubernetes.io/projected/64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf-kube-api-access-m8f79\") pod \"calico-apiserver-748d86bbf-zd92h\" (UID: \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\") " pod="calico-apiserver/calico-apiserver-748d86bbf-zd92h" Sep 13 00:54:11.753276 kubelet[2667]: I0913 00:54:11.753263 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ce3dd00f-1685-4fb8-a21f-eacbff2544a7-goldmane-key-pair\") pod \"goldmane-7988f88666-rtn9j\" (UID: \"ce3dd00f-1685-4fb8-a21f-eacbff2544a7\") " pod="calico-system/goldmane-7988f88666-rtn9j" Sep 13 00:54:11.753350 kubelet[2667]: I0913 00:54:11.753339 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj5l8\" (UniqueName: \"kubernetes.io/projected/ce3dd00f-1685-4fb8-a21f-eacbff2544a7-kube-api-access-dj5l8\") pod \"goldmane-7988f88666-rtn9j\" (UID: \"ce3dd00f-1685-4fb8-a21f-eacbff2544a7\") " pod="calico-system/goldmane-7988f88666-rtn9j" Sep 13 00:54:11.753427 kubelet[2667]: I0913 00:54:11.753413 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0603fd61-ad5b-4bb1-81d5-450dc870214c-config-volume\") pod \"coredns-7c65d6cfc9-794cx\" (UID: \"0603fd61-ad5b-4bb1-81d5-450dc870214c\") " 
pod="kube-system/coredns-7c65d6cfc9-794cx" Sep 13 00:54:11.753507 kubelet[2667]: I0913 00:54:11.753495 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hs22\" (UniqueName: \"kubernetes.io/projected/0603fd61-ad5b-4bb1-81d5-450dc870214c-kube-api-access-2hs22\") pod \"coredns-7c65d6cfc9-794cx\" (UID: \"0603fd61-ad5b-4bb1-81d5-450dc870214c\") " pod="kube-system/coredns-7c65d6cfc9-794cx" Sep 13 00:54:11.753600 kubelet[2667]: I0913 00:54:11.753587 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce3dd00f-1685-4fb8-a21f-eacbff2544a7-config\") pod \"goldmane-7988f88666-rtn9j\" (UID: \"ce3dd00f-1685-4fb8-a21f-eacbff2544a7\") " pod="calico-system/goldmane-7988f88666-rtn9j" Sep 13 00:54:11.753682 kubelet[2667]: I0913 00:54:11.753669 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclql\" (UniqueName: \"kubernetes.io/projected/1a2d03cb-de38-46e3-bef7-5c63c9032e67-kube-api-access-cclql\") pod \"coredns-7c65d6cfc9-skrlz\" (UID: \"1a2d03cb-de38-46e3-bef7-5c63c9032e67\") " pod="kube-system/coredns-7c65d6cfc9-skrlz" Sep 13 00:54:11.753761 kubelet[2667]: I0913 00:54:11.753749 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a2d03cb-de38-46e3-bef7-5c63c9032e67-config-volume\") pod \"coredns-7c65d6cfc9-skrlz\" (UID: \"1a2d03cb-de38-46e3-bef7-5c63c9032e67\") " pod="kube-system/coredns-7c65d6cfc9-skrlz" Sep 13 00:54:11.753908 kubelet[2667]: I0913 00:54:11.753895 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce3dd00f-1685-4fb8-a21f-eacbff2544a7-goldmane-ca-bundle\") pod \"goldmane-7988f88666-rtn9j\" (UID: 
\"ce3dd00f-1685-4fb8-a21f-eacbff2544a7\") " pod="calico-system/goldmane-7988f88666-rtn9j" Sep 13 00:54:11.754897 kubelet[2667]: I0913 00:54:11.754868 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf-calico-apiserver-certs\") pod \"calico-apiserver-748d86bbf-zd92h\" (UID: \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\") " pod="calico-apiserver/calico-apiserver-748d86bbf-zd92h" Sep 13 00:54:13.937969 kubelet[2667]: E0913 00:54:13.937938 2667 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.15s" Sep 13 00:54:13.948222 env[1564]: time="2025-09-13T00:54:13.945341284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kg2m8,Uid:79f7874d-3642-4f78-9634-0fbf12fe0b02,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:13.961926 env[1564]: time="2025-09-13T00:54:13.961315207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c444948d-hspwf,Uid:e0838642-fa34-4ba2-b6e9-33b770e8c2d4,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:13.961926 env[1564]: time="2025-09-13T00:54:13.961694205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4bc6787-kfxxk,Uid:15e9425d-6b94-4324-8278-89ee850f4d55,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:54:13.963952 env[1564]: time="2025-09-13T00:54:13.963795695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76f544d5bb-l8svr,Uid:87b59c29-b153-48e3-b1f8-09c6220faf33,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:13.977039 env[1564]: time="2025-09-13T00:54:13.976993831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-skrlz,Uid:1a2d03cb-de38-46e3-bef7-5c63c9032e67,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:13.977475 env[1564]: time="2025-09-13T00:54:13.977446129Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-794cx,Uid:0603fd61-ad5b-4bb1-81d5-450dc870214c,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:13.980839 env[1564]: time="2025-09-13T00:54:13.980783213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748d86bbf-sqqp6,Uid:befb9c20-74f9-48dc-9181-e5e1cb0477a7,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:54:13.983502 env[1564]: time="2025-09-13T00:54:13.983477100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rtn9j,Uid:ce3dd00f-1685-4fb8-a21f-eacbff2544a7,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:13.986165 env[1564]: time="2025-09-13T00:54:13.986140487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748d86bbf-zd92h,Uid:64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:54:14.167894 env[1564]: time="2025-09-13T00:54:14.167853122Z" level=info msg="shim disconnected" id=6889d4e240b26709f39d98e68f0a05991482bf4124867e1cfee76398d0f00385 Sep 13 00:54:14.167894 env[1564]: time="2025-09-13T00:54:14.167897322Z" level=warning msg="cleaning up after shim disconnected" id=6889d4e240b26709f39d98e68f0a05991482bf4124867e1cfee76398d0f00385 namespace=k8s.io Sep 13 00:54:14.168094 env[1564]: time="2025-09-13T00:54:14.167908922Z" level=info msg="cleaning up dead shim" Sep 13 00:54:14.179372 env[1564]: time="2025-09-13T00:54:14.179332868Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3445 runtime=io.containerd.runc.v2\n" Sep 13 00:54:14.890025 env[1564]: time="2025-09-13T00:54:14.889973493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:54:16.565918 env[1564]: time="2025-09-13T00:54:16.565842034Z" level=error msg="Failed to destroy network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.566365 env[1564]: time="2025-09-13T00:54:16.566230732Z" level=error msg="encountered an error cleaning up failed sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.566365 env[1564]: time="2025-09-13T00:54:16.566284432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748d86bbf-zd92h,Uid:64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.566623 kubelet[2667]: E0913 00:54:16.566541 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.567113 kubelet[2667]: E0913 00:54:16.566657 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-748d86bbf-zd92h" Sep 13 00:54:16.567113 kubelet[2667]: E0913 00:54:16.566686 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748d86bbf-zd92h" Sep 13 00:54:16.567113 kubelet[2667]: E0913 00:54:16.566741 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-748d86bbf-zd92h_calico-apiserver(64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-748d86bbf-zd92h_calico-apiserver(64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748d86bbf-zd92h" podUID="64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf" Sep 13 00:54:16.860245 env[1564]: time="2025-09-13T00:54:16.859750593Z" level=error msg="Failed to destroy network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.860391 env[1564]: time="2025-09-13T00:54:16.860351690Z" level=error msg="encountered an error cleaning up failed sandbox 
\"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.860460 env[1564]: time="2025-09-13T00:54:16.860420990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76f544d5bb-l8svr,Uid:87b59c29-b153-48e3-b1f8-09c6220faf33,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.860707 kubelet[2667]: E0913 00:54:16.860672 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.860791 kubelet[2667]: E0913 00:54:16.860732 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76f544d5bb-l8svr" Sep 13 00:54:16.861053 kubelet[2667]: E0913 00:54:16.860760 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76f544d5bb-l8svr" Sep 13 00:54:16.861167 kubelet[2667]: E0913 00:54:16.861095 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76f544d5bb-l8svr_calico-system(87b59c29-b153-48e3-b1f8-09c6220faf33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-76f544d5bb-l8svr_calico-system(87b59c29-b153-48e3-b1f8-09c6220faf33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76f544d5bb-l8svr" podUID="87b59c29-b153-48e3-b1f8-09c6220faf33" Sep 13 00:54:16.893567 kubelet[2667]: I0913 00:54:16.893540 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:16.895478 env[1564]: time="2025-09-13T00:54:16.895055632Z" level=info msg="StopPodSandbox for \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\"" Sep 13 00:54:16.896714 kubelet[2667]: I0913 00:54:16.896300 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:54:16.896862 env[1564]: time="2025-09-13T00:54:16.896810524Z" level=info msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\"" Sep 13 00:54:16.946792 env[1564]: time="2025-09-13T00:54:16.946729796Z" level=error msg="StopPodSandbox for 
\"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\" failed" error="failed to destroy network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.947010 kubelet[2667]: E0913 00:54:16.946974 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:16.947106 kubelet[2667]: E0913 00:54:16.947041 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14"} Sep 13 00:54:16.947150 kubelet[2667]: E0913 00:54:16.947115 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87b59c29-b153-48e3-b1f8-09c6220faf33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:16.947247 kubelet[2667]: E0913 00:54:16.947158 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87b59c29-b153-48e3-b1f8-09c6220faf33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76f544d5bb-l8svr" podUID="87b59c29-b153-48e3-b1f8-09c6220faf33" Sep 13 00:54:16.949261 env[1564]: time="2025-09-13T00:54:16.949143185Z" level=error msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" failed" error="failed to destroy network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.949686 kubelet[2667]: E0913 00:54:16.949535 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:54:16.949686 kubelet[2667]: E0913 00:54:16.949579 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57"} Sep 13 00:54:16.949686 kubelet[2667]: E0913 00:54:16.949618 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:16.949686 kubelet[2667]: E0913 00:54:16.949645 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748d86bbf-zd92h" podUID="64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf" Sep 13 00:54:16.977469 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14-shm.mount: Deactivated successfully. Sep 13 00:54:16.978007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57-shm.mount: Deactivated successfully. Sep 13 00:54:16.993336 env[1564]: time="2025-09-13T00:54:16.993280783Z" level=error msg="Failed to destroy network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.997600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9-shm.mount: Deactivated successfully. 
Sep 13 00:54:16.998828 env[1564]: time="2025-09-13T00:54:16.998785258Z" level=error msg="encountered an error cleaning up failed sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.998927 env[1564]: time="2025-09-13T00:54:16.998854858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-skrlz,Uid:1a2d03cb-de38-46e3-bef7-5c63c9032e67,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.999109 kubelet[2667]: E0913 00:54:16.999078 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:16.999209 kubelet[2667]: E0913 00:54:16.999136 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-skrlz" Sep 13 00:54:16.999209 kubelet[2667]: E0913 00:54:16.999163 2667 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-skrlz" Sep 13 00:54:16.999597 kubelet[2667]: E0913 00:54:16.999247 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-skrlz_kube-system(1a2d03cb-de38-46e3-bef7-5c63c9032e67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-skrlz_kube-system(1a2d03cb-de38-46e3-bef7-5c63c9032e67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-skrlz" podUID="1a2d03cb-de38-46e3-bef7-5c63c9032e67" Sep 13 00:54:17.065651 env[1564]: time="2025-09-13T00:54:17.065593859Z" level=error msg="Failed to destroy network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.069375 env[1564]: time="2025-09-13T00:54:17.065991557Z" level=error msg="encountered an error cleaning up failed sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 13 00:54:17.069375 env[1564]: time="2025-09-13T00:54:17.066048257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-794cx,Uid:0603fd61-ad5b-4bb1-81d5-450dc870214c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.069490 kubelet[2667]: E0913 00:54:17.066274 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.069490 kubelet[2667]: E0913 00:54:17.066335 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-794cx" Sep 13 00:54:17.069490 kubelet[2667]: E0913 00:54:17.066363 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-794cx" Sep 13 
00:54:17.069631 kubelet[2667]: E0913 00:54:17.066424 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-794cx_kube-system(0603fd61-ad5b-4bb1-81d5-450dc870214c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-794cx_kube-system(0603fd61-ad5b-4bb1-81d5-450dc870214c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-794cx" podUID="0603fd61-ad5b-4bb1-81d5-450dc870214c" Sep 13 00:54:17.071608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010-shm.mount: Deactivated successfully. Sep 13 00:54:17.119059 env[1564]: time="2025-09-13T00:54:17.117371027Z" level=error msg="Failed to destroy network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.122033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db-shm.mount: Deactivated successfully. 
Sep 13 00:54:17.126853 env[1564]: time="2025-09-13T00:54:17.126801585Z" level=error msg="encountered an error cleaning up failed sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.127042 env[1564]: time="2025-09-13T00:54:17.127004284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748d86bbf-sqqp6,Uid:befb9c20-74f9-48dc-9181-e5e1cb0477a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.127395 kubelet[2667]: E0913 00:54:17.127363 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.127497 kubelet[2667]: E0913 00:54:17.127420 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748d86bbf-sqqp6" Sep 13 00:54:17.127497 kubelet[2667]: E0913 00:54:17.127445 2667 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748d86bbf-sqqp6" Sep 13 00:54:17.127576 kubelet[2667]: E0913 00:54:17.127507 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-748d86bbf-sqqp6_calico-apiserver(befb9c20-74f9-48dc-9181-e5e1cb0477a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-748d86bbf-sqqp6_calico-apiserver(befb9c20-74f9-48dc-9181-e5e1cb0477a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748d86bbf-sqqp6" podUID="befb9c20-74f9-48dc-9181-e5e1cb0477a7" Sep 13 00:54:17.168116 env[1564]: time="2025-09-13T00:54:17.168060301Z" level=error msg="Failed to destroy network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.171312 env[1564]: time="2025-09-13T00:54:17.168445799Z" level=error msg="encountered an error cleaning up failed sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.171312 env[1564]: time="2025-09-13T00:54:17.168500899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rtn9j,Uid:ce3dd00f-1685-4fb8-a21f-eacbff2544a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.171456 kubelet[2667]: E0913 00:54:17.168721 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.171456 kubelet[2667]: E0913 00:54:17.168784 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-rtn9j" Sep 13 00:54:17.171456 kubelet[2667]: E0913 00:54:17.168847 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-rtn9j" Sep 13 00:54:17.171581 kubelet[2667]: E0913 00:54:17.168895 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-rtn9j_calico-system(ce3dd00f-1685-4fb8-a21f-eacbff2544a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-rtn9j_calico-system(ce3dd00f-1685-4fb8-a21f-eacbff2544a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-rtn9j" podUID="ce3dd00f-1685-4fb8-a21f-eacbff2544a7" Sep 13 00:54:17.171533 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719-shm.mount: Deactivated successfully. 
Sep 13 00:54:17.258258 env[1564]: time="2025-09-13T00:54:17.258180897Z" level=error msg="Failed to destroy network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.258614 env[1564]: time="2025-09-13T00:54:17.258574996Z" level=error msg="encountered an error cleaning up failed sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.258686 env[1564]: time="2025-09-13T00:54:17.258631295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c444948d-hspwf,Uid:e0838642-fa34-4ba2-b6e9-33b770e8c2d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.258914 kubelet[2667]: E0913 00:54:17.258872 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.258991 kubelet[2667]: E0913 00:54:17.258930 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77c444948d-hspwf" Sep 13 00:54:17.258991 kubelet[2667]: E0913 00:54:17.258960 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77c444948d-hspwf" Sep 13 00:54:17.259070 kubelet[2667]: E0913 00:54:17.259020 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77c444948d-hspwf_calico-system(e0838642-fa34-4ba2-b6e9-33b770e8c2d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77c444948d-hspwf_calico-system(e0838642-fa34-4ba2-b6e9-33b770e8c2d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77c444948d-hspwf" podUID="e0838642-fa34-4ba2-b6e9-33b770e8c2d4" Sep 13 00:54:17.305210 env[1564]: time="2025-09-13T00:54:17.305145987Z" level=error msg="Failed to destroy network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.305546 env[1564]: time="2025-09-13T00:54:17.305512586Z" level=error msg="encountered an error cleaning up failed sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.305619 env[1564]: time="2025-09-13T00:54:17.305574185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kg2m8,Uid:79f7874d-3642-4f78-9634-0fbf12fe0b02,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.305824 kubelet[2667]: E0913 00:54:17.305793 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.305901 kubelet[2667]: E0913 00:54:17.305857 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kg2m8" Sep 13 
00:54:17.305901 kubelet[2667]: E0913 00:54:17.305887 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kg2m8" Sep 13 00:54:17.307182 kubelet[2667]: E0913 00:54:17.305963 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kg2m8_calico-system(79f7874d-3642-4f78-9634-0fbf12fe0b02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kg2m8_calico-system(79f7874d-3642-4f78-9634-0fbf12fe0b02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kg2m8" podUID="79f7874d-3642-4f78-9634-0fbf12fe0b02" Sep 13 00:54:17.394841 env[1564]: time="2025-09-13T00:54:17.392401697Z" level=error msg="Failed to destroy network for sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.394841 env[1564]: time="2025-09-13T00:54:17.392756195Z" level=error msg="encountered an error cleaning up failed sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.394841 env[1564]: time="2025-09-13T00:54:17.392816195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4bc6787-kfxxk,Uid:15e9425d-6b94-4324-8278-89ee850f4d55,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.395119 kubelet[2667]: E0913 00:54:17.393021 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:17.395119 kubelet[2667]: E0913 00:54:17.393100 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c4bc6787-kfxxk" Sep 13 00:54:17.395119 kubelet[2667]: E0913 00:54:17.393129 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c4bc6787-kfxxk" Sep 13 00:54:17.395245 kubelet[2667]: E0913 00:54:17.393182 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c4bc6787-kfxxk_calico-apiserver(15e9425d-6b94-4324-8278-89ee850f4d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c4bc6787-kfxxk_calico-apiserver(15e9425d-6b94-4324-8278-89ee850f4d55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c4bc6787-kfxxk" podUID="15e9425d-6b94-4324-8278-89ee850f4d55" Sep 13 00:54:17.899449 kubelet[2667]: I0913 00:54:17.899113 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:17.899910 kubelet[2667]: I0913 00:54:17.899848 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:17.900480 env[1564]: time="2025-09-13T00:54:17.900434824Z" level=info msg="StopPodSandbox for \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\"" Sep 13 00:54:17.901519 env[1564]: time="2025-09-13T00:54:17.901025921Z" level=info msg="StopPodSandbox for \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\"" Sep 13 00:54:17.901606 kubelet[2667]: I0913 00:54:17.901102 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:17.902290 env[1564]: 
time="2025-09-13T00:54:17.902262815Z" level=info msg="StopPodSandbox for \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\"" Sep 13 00:54:17.903962 kubelet[2667]: I0913 00:54:17.903577 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:17.904259 env[1564]: time="2025-09-13T00:54:17.904237307Z" level=info msg="StopPodSandbox for \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\"" Sep 13 00:54:17.905684 kubelet[2667]: I0913 00:54:17.905664 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:17.906312 env[1564]: time="2025-09-13T00:54:17.906279897Z" level=info msg="StopPodSandbox for \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\"" Sep 13 00:54:17.907730 kubelet[2667]: I0913 00:54:17.907708 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:17.908393 env[1564]: time="2025-09-13T00:54:17.908349588Z" level=info msg="StopPodSandbox for \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\"" Sep 13 00:54:17.911665 kubelet[2667]: I0913 00:54:17.911574 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:54:17.912459 env[1564]: time="2025-09-13T00:54:17.912424570Z" level=info msg="StopPodSandbox for \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\"" Sep 13 00:54:17.970657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7-shm.mount: Deactivated successfully. 
Sep 13 00:54:17.970864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11-shm.mount: Deactivated successfully. Sep 13 00:54:17.971002 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539-shm.mount: Deactivated successfully. Sep 13 00:54:18.025481 env[1564]: time="2025-09-13T00:54:18.025419366Z" level=error msg="StopPodSandbox for \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\" failed" error="failed to destroy network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.026054 kubelet[2667]: E0913 00:54:18.025882 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:18.026054 kubelet[2667]: E0913 00:54:18.025940 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db"} Sep 13 00:54:18.026054 kubelet[2667]: E0913 00:54:18.025985 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"befb9c20-74f9-48dc-9181-e5e1cb0477a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:18.026054 kubelet[2667]: E0913 00:54:18.026017 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"befb9c20-74f9-48dc-9181-e5e1cb0477a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748d86bbf-sqqp6" podUID="befb9c20-74f9-48dc-9181-e5e1cb0477a7" Sep 13 00:54:18.030049 env[1564]: time="2025-09-13T00:54:18.030003746Z" level=error msg="StopPodSandbox for \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\" failed" error="failed to destroy network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.030502 kubelet[2667]: E0913 00:54:18.030351 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:18.030502 kubelet[2667]: E0913 00:54:18.030390 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9"} Sep 13 00:54:18.030502 kubelet[2667]: E0913 00:54:18.030436 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a2d03cb-de38-46e3-bef7-5c63c9032e67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:18.030502 kubelet[2667]: E0913 00:54:18.030466 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a2d03cb-de38-46e3-bef7-5c63c9032e67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-skrlz" podUID="1a2d03cb-de38-46e3-bef7-5c63c9032e67" Sep 13 00:54:18.041808 env[1564]: time="2025-09-13T00:54:18.041765995Z" level=error msg="StopPodSandbox for \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\" failed" error="failed to destroy network for sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.042204 kubelet[2667]: E0913 00:54:18.042054 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:18.042204 kubelet[2667]: E0913 00:54:18.042091 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7"} Sep 13 00:54:18.042204 kubelet[2667]: E0913 00:54:18.042134 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15e9425d-6b94-4324-8278-89ee850f4d55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:18.042204 kubelet[2667]: E0913 00:54:18.042161 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15e9425d-6b94-4324-8278-89ee850f4d55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c4bc6787-kfxxk" podUID="15e9425d-6b94-4324-8278-89ee850f4d55" Sep 13 00:54:18.062290 env[1564]: time="2025-09-13T00:54:18.062235605Z" level=error msg="StopPodSandbox for \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\" failed" error="failed to destroy network for 
sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.062694 kubelet[2667]: E0913 00:54:18.062509 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:54:18.062694 kubelet[2667]: E0913 00:54:18.062565 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719"} Sep 13 00:54:18.062694 kubelet[2667]: E0913 00:54:18.062607 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce3dd00f-1685-4fb8-a21f-eacbff2544a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:18.062694 kubelet[2667]: E0913 00:54:18.062638 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce3dd00f-1685-4fb8-a21f-eacbff2544a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-rtn9j" podUID="ce3dd00f-1685-4fb8-a21f-eacbff2544a7" Sep 13 00:54:18.063999 env[1564]: time="2025-09-13T00:54:18.063947797Z" level=error msg="StopPodSandbox for \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\" failed" error="failed to destroy network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.064390 kubelet[2667]: E0913 00:54:18.064245 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:18.064390 kubelet[2667]: E0913 00:54:18.064286 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539"} Sep 13 00:54:18.064390 kubelet[2667]: E0913 00:54:18.064319 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0838642-fa34-4ba2-b6e9-33b770e8c2d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Sep 13 00:54:18.064390 kubelet[2667]: E0913 00:54:18.064345 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0838642-fa34-4ba2-b6e9-33b770e8c2d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77c444948d-hspwf" podUID="e0838642-fa34-4ba2-b6e9-33b770e8c2d4" Sep 13 00:54:18.068066 env[1564]: time="2025-09-13T00:54:18.068022779Z" level=error msg="StopPodSandbox for \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\" failed" error="failed to destroy network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.068389 kubelet[2667]: E0913 00:54:18.068248 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:18.068389 kubelet[2667]: E0913 00:54:18.068287 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010"} Sep 13 00:54:18.068389 kubelet[2667]: E0913 00:54:18.068320 2667 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0603fd61-ad5b-4bb1-81d5-450dc870214c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:18.068389 kubelet[2667]: E0913 00:54:18.068347 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0603fd61-ad5b-4bb1-81d5-450dc870214c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-794cx" podUID="0603fd61-ad5b-4bb1-81d5-450dc870214c" Sep 13 00:54:18.075578 env[1564]: time="2025-09-13T00:54:18.075537946Z" level=error msg="StopPodSandbox for \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\" failed" error="failed to destroy network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:18.075740 kubelet[2667]: E0913 00:54:18.075713 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:18.075813 kubelet[2667]: E0913 00:54:18.075747 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11"} Sep 13 00:54:18.075813 kubelet[2667]: E0913 00:54:18.075799 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79f7874d-3642-4f78-9634-0fbf12fe0b02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:18.075916 kubelet[2667]: E0913 00:54:18.075828 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79f7874d-3642-4f78-9634-0fbf12fe0b02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kg2m8" podUID="79f7874d-3642-4f78-9634-0fbf12fe0b02" Sep 13 00:54:23.829456 kubelet[2667]: I0913 00:54:23.829424 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:23.871000 audit[3834]: NETFILTER_CFG table=filter:106 family=2 entries=21 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:23.871000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 
a1=7ffe2bf2edb0 a2=0 a3=7ffe2bf2ed9c items=0 ppid=2776 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:23.906539 kernel: audit: type=1325 audit(1757724863.871:303): table=filter:106 family=2 entries=21 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:23.906659 kernel: audit: type=1300 audit(1757724863.871:303): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe2bf2edb0 a2=0 a3=7ffe2bf2ed9c items=0 ppid=2776 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:23.906695 kernel: audit: type=1327 audit(1757724863.871:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:23.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:23.918000 audit[3834]: NETFILTER_CFG table=nat:107 family=2 entries=19 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:23.918000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe2bf2edb0 a2=0 a3=7ffe2bf2ed9c items=0 ppid=2776 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:23.953424 kernel: audit: type=1325 audit(1757724863.918:304): table=nat:107 family=2 entries=19 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:23.953602 kernel: audit: type=1300 
audit(1757724863.918:304): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe2bf2edb0 a2=0 a3=7ffe2bf2ed9c items=0 ppid=2776 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:23.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:23.954213 kernel: audit: type=1327 audit(1757724863.918:304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:27.785080 env[1564]: time="2025-09-13T00:54:27.785032731Z" level=info msg="StopPodSandbox for \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\"" Sep 13 00:54:27.858570 env[1564]: time="2025-09-13T00:54:27.858500357Z" level=error msg="StopPodSandbox for \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\" failed" error="failed to destroy network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:27.859127 kubelet[2667]: E0913 00:54:27.858959 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:27.859127 kubelet[2667]: E0913 00:54:27.859009 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14"} Sep 13 00:54:27.859127 kubelet[2667]: E0913 00:54:27.859052 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87b59c29-b153-48e3-b1f8-09c6220faf33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:27.859127 kubelet[2667]: E0913 00:54:27.859082 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87b59c29-b153-48e3-b1f8-09c6220faf33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76f544d5bb-l8svr" podUID="87b59c29-b153-48e3-b1f8-09c6220faf33" Sep 13 00:54:28.779529 env[1564]: time="2025-09-13T00:54:28.779487472Z" level=info msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\"" Sep 13 00:54:28.785026 env[1564]: time="2025-09-13T00:54:28.784987752Z" level=info msg="StopPodSandbox for \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\"" Sep 13 00:54:28.852363 env[1564]: time="2025-09-13T00:54:28.852300105Z" level=error msg="StopPodSandbox for \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\" failed" error="failed to destroy network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:28.853230 kubelet[2667]: E0913 00:54:28.853007 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:54:28.853230 kubelet[2667]: E0913 00:54:28.853074 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719"} Sep 13 00:54:28.853230 kubelet[2667]: E0913 00:54:28.853120 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce3dd00f-1685-4fb8-a21f-eacbff2544a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:28.853230 kubelet[2667]: E0913 00:54:28.853165 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce3dd00f-1685-4fb8-a21f-eacbff2544a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-rtn9j" podUID="ce3dd00f-1685-4fb8-a21f-eacbff2544a7" Sep 13 00:54:28.853650 env[1564]: time="2025-09-13T00:54:28.853605700Z" level=error msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" failed" error="failed to destroy network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:28.854057 kubelet[2667]: E0913 00:54:28.853876 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:54:28.854057 kubelet[2667]: E0913 00:54:28.853922 2667 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57"} Sep 13 00:54:28.854057 kubelet[2667]: E0913 00:54:28.853969 2667 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:28.854057 kubelet[2667]: E0913 00:54:28.853999 2667 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748d86bbf-zd92h" podUID="64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf" Sep 13 00:54:29.056589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360420504.mount: Deactivated successfully. Sep 13 00:54:29.103762 env[1564]: time="2025-09-13T00:54:29.103715790Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:29.112952 env[1564]: time="2025-09-13T00:54:29.112900657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:29.118585 env[1564]: time="2025-09-13T00:54:29.118543536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:29.124074 env[1564]: time="2025-09-13T00:54:29.124038116Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:29.124381 env[1564]: time="2025-09-13T00:54:29.124347415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference 
\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:54:29.147001 env[1564]: time="2025-09-13T00:54:29.146960034Z" level=info msg="CreateContainer within sandbox \"7b5922af95b3d398207b0527aedfb2ca413d2588f9ffe376ce71dd0c36778cf3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:54:29.196763 env[1564]: time="2025-09-13T00:54:29.196713054Z" level=info msg="CreateContainer within sandbox \"7b5922af95b3d398207b0527aedfb2ca413d2588f9ffe376ce71dd0c36778cf3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e076f83ad1912e097e3a2bef1cbb87e44e7602a49c36bb216e22d751ef66d2fe\"" Sep 13 00:54:29.197472 env[1564]: time="2025-09-13T00:54:29.197292152Z" level=info msg="StartContainer for \"e076f83ad1912e097e3a2bef1cbb87e44e7602a49c36bb216e22d751ef66d2fe\"" Sep 13 00:54:29.261907 env[1564]: time="2025-09-13T00:54:29.261841720Z" level=info msg="StartContainer for \"e076f83ad1912e097e3a2bef1cbb87e44e7602a49c36bb216e22d751ef66d2fe\" returns successfully" Sep 13 00:54:29.420633 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:54:29.420789 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:54:29.536626 env[1564]: time="2025-09-13T00:54:29.536586729Z" level=info msg="StopPodSandbox for \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\"" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.606 [INFO][3949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.606 [INFO][3949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" iface="eth0" netns="/var/run/netns/cni-914fd798-a820-e3ab-1361-e589ccf7c75e" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.606 [INFO][3949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" iface="eth0" netns="/var/run/netns/cni-914fd798-a820-e3ab-1361-e589ccf7c75e" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.606 [INFO][3949] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" iface="eth0" netns="/var/run/netns/cni-914fd798-a820-e3ab-1361-e589ccf7c75e" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.606 [INFO][3949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.606 [INFO][3949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.646 [INFO][3956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" HandleID="k8s-pod-network.96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.647 [INFO][3956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.647 [INFO][3956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.656 [WARNING][3956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" HandleID="k8s-pod-network.96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.656 [INFO][3956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" HandleID="k8s-pod-network.96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.657 [INFO][3956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:29.662888 env[1564]: 2025-09-13 00:54:29.661 [INFO][3949] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:29.663454 env[1564]: time="2025-09-13T00:54:29.663044673Z" level=info msg="TearDown network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\" successfully" Sep 13 00:54:29.663454 env[1564]: time="2025-09-13T00:54:29.663093173Z" level=info msg="StopPodSandbox for \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\" returns successfully" Sep 13 00:54:29.769894 kubelet[2667]: I0913 00:54:29.769856 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87b59c29-b153-48e3-b1f8-09c6220faf33-whisker-ca-bundle\") pod \"87b59c29-b153-48e3-b1f8-09c6220faf33\" (UID: \"87b59c29-b153-48e3-b1f8-09c6220faf33\") " Sep 13 00:54:29.770373 kubelet[2667]: I0913 00:54:29.769910 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87b59c29-b153-48e3-b1f8-09c6220faf33-whisker-backend-key-pair\") pod 
\"87b59c29-b153-48e3-b1f8-09c6220faf33\" (UID: \"87b59c29-b153-48e3-b1f8-09c6220faf33\") " Sep 13 00:54:29.770373 kubelet[2667]: I0913 00:54:29.769945 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c9rf\" (UniqueName: \"kubernetes.io/projected/87b59c29-b153-48e3-b1f8-09c6220faf33-kube-api-access-5c9rf\") pod \"87b59c29-b153-48e3-b1f8-09c6220faf33\" (UID: \"87b59c29-b153-48e3-b1f8-09c6220faf33\") " Sep 13 00:54:29.771015 kubelet[2667]: I0913 00:54:29.770985 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b59c29-b153-48e3-b1f8-09c6220faf33-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "87b59c29-b153-48e3-b1f8-09c6220faf33" (UID: "87b59c29-b153-48e3-b1f8-09c6220faf33"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:54:29.773735 kubelet[2667]: I0913 00:54:29.773708 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b59c29-b153-48e3-b1f8-09c6220faf33-kube-api-access-5c9rf" (OuterVolumeSpecName: "kube-api-access-5c9rf") pod "87b59c29-b153-48e3-b1f8-09c6220faf33" (UID: "87b59c29-b153-48e3-b1f8-09c6220faf33"). InnerVolumeSpecName "kube-api-access-5c9rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:54:29.773813 kubelet[2667]: I0913 00:54:29.773706 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b59c29-b153-48e3-b1f8-09c6220faf33-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "87b59c29-b153-48e3-b1f8-09c6220faf33" (UID: "87b59c29-b153-48e3-b1f8-09c6220faf33"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:54:29.781137 env[1564]: time="2025-09-13T00:54:29.780855748Z" level=info msg="StopPodSandbox for \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\"" Sep 13 00:54:29.783529 env[1564]: time="2025-09-13T00:54:29.783498239Z" level=info msg="StopPodSandbox for \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\"" Sep 13 00:54:29.783924 env[1564]: time="2025-09-13T00:54:29.783901437Z" level=info msg="StopPodSandbox for \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\"" Sep 13 00:54:29.784582 env[1564]: time="2025-09-13T00:54:29.784229636Z" level=info msg="StopPodSandbox for \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\"" Sep 13 00:54:29.870685 kubelet[2667]: I0913 00:54:29.870640 2667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c9rf\" (UniqueName: \"kubernetes.io/projected/87b59c29-b153-48e3-b1f8-09c6220faf33-kube-api-access-5c9rf\") on node \"ci-3510.3.8-n-1677b4f607\" DevicePath \"\"" Sep 13 00:54:29.870685 kubelet[2667]: I0913 00:54:29.870694 2667 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87b59c29-b153-48e3-b1f8-09c6220faf33-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-1677b4f607\" DevicePath \"\"" Sep 13 00:54:29.870915 kubelet[2667]: I0913 00:54:29.870716 2667 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87b59c29-b153-48e3-b1f8-09c6220faf33-whisker-ca-bundle\") on node \"ci-3510.3.8-n-1677b4f607\" DevicePath \"\"" Sep 13 00:54:29.999443 kubelet[2667]: I0913 00:54:29.999000 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wlxf8" podStartSLOduration=1.5975396910000002 podStartE2EDuration="29.998979962s" podCreationTimestamp="2025-09-13 00:54:00 +0000 UTC" firstStartedPulling="2025-09-13 
00:54:00.72413854 +0000 UTC m=+21.100191928" lastFinishedPulling="2025-09-13 00:54:29.125578911 +0000 UTC m=+49.501632199" observedRunningTime="2025-09-13 00:54:29.977097041 +0000 UTC m=+50.353150329" watchObservedRunningTime="2025-09-13 00:54:29.998979962 +0000 UTC m=+50.375033350" Sep 13 00:54:30.053938 systemd[1]: run-netns-cni\x2d914fd798\x2da820\x2de3ab\x2d1361\x2de589ccf7c75e.mount: Deactivated successfully. Sep 13 00:54:30.054112 systemd[1]: var-lib-kubelet-pods-87b59c29\x2db153\x2d48e3\x2db1f8\x2d09c6220faf33-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5c9rf.mount: Deactivated successfully. Sep 13 00:54:30.054251 systemd[1]: var-lib-kubelet-pods-87b59c29\x2db153\x2d48e3\x2db1f8\x2d09c6220faf33-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:29.865 [INFO][4013] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:29.866 [INFO][4013] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" iface="eth0" netns="/var/run/netns/cni-9809f769-6e95-ab59-581c-f18103ef1d8f" Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:29.884 [INFO][4013] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" iface="eth0" netns="/var/run/netns/cni-9809f769-6e95-ab59-581c-f18103ef1d8f" Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:29.891 [INFO][4013] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" iface="eth0" netns="/var/run/netns/cni-9809f769-6e95-ab59-581c-f18103ef1d8f" Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:29.891 [INFO][4013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:29.891 [INFO][4013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:30.025 [INFO][4042] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" HandleID="k8s-pod-network.53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:30.026 [INFO][4042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:30.026 [INFO][4042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:30.072 [WARNING][4042] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" HandleID="k8s-pod-network.53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:30.072 [INFO][4042] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" HandleID="k8s-pod-network.53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:30.083 [INFO][4042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:30.093770 env[1564]: 2025-09-13 00:54:30.092 [INFO][4013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:30.099079 systemd[1]: run-netns-cni\x2d9809f769\x2d6e95\x2dab59\x2d581c\x2df18103ef1d8f.mount: Deactivated successfully. Sep 13 00:54:30.100427 env[1564]: time="2025-09-13T00:54:30.100378202Z" level=info msg="TearDown network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\" successfully" Sep 13 00:54:30.100554 env[1564]: time="2025-09-13T00:54:30.100537101Z" level=info msg="StopPodSandbox for \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\" returns successfully" Sep 13 00:54:30.101696 env[1564]: time="2025-09-13T00:54:30.101663997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-794cx,Uid:0603fd61-ad5b-4bb1-81d5-450dc870214c,Namespace:kube-system,Attempt:1,}" Sep 13 00:54:30.113649 systemd[1]: run-containerd-runc-k8s.io-e076f83ad1912e097e3a2bef1cbb87e44e7602a49c36bb216e22d751ef66d2fe-runc.z2VpAE.mount: Deactivated successfully. 
Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:29.899 [INFO][4011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:29.899 [INFO][4011] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" iface="eth0" netns="/var/run/netns/cni-d072d7aa-fd9e-a05b-9636-beb959322b22" Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:29.900 [INFO][4011] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" iface="eth0" netns="/var/run/netns/cni-d072d7aa-fd9e-a05b-9636-beb959322b22" Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:29.901 [INFO][4011] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" iface="eth0" netns="/var/run/netns/cni-d072d7aa-fd9e-a05b-9636-beb959322b22" Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:29.901 [INFO][4011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:29.901 [INFO][4011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:30.075 [INFO][4045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" HandleID="k8s-pod-network.ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:30.075 [INFO][4045] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:30.092 [INFO][4045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:30.131 [WARNING][4045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" HandleID="k8s-pod-network.ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:30.133 [INFO][4045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" HandleID="k8s-pod-network.ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:30.135 [INFO][4045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:30.148876 env[1564]: 2025-09-13 00:54:30.140 [INFO][4011] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:30.153358 systemd[1]: run-netns-cni\x2dd072d7aa\x2dfd9e\x2da05b\x2d9636\x2dbeb959322b22.mount: Deactivated successfully. 
Sep 13 00:54:30.158464 env[1564]: time="2025-09-13T00:54:30.158418696Z" level=info msg="TearDown network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\" successfully" Sep 13 00:54:30.158626 env[1564]: time="2025-09-13T00:54:30.158581296Z" level=info msg="StopPodSandbox for \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\" returns successfully" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:29.914 [INFO][4017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:29.914 [INFO][4017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" iface="eth0" netns="/var/run/netns/cni-0477846e-ae75-30d8-64d5-50e09fef8822" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:29.914 [INFO][4017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" iface="eth0" netns="/var/run/netns/cni-0477846e-ae75-30d8-64d5-50e09fef8822" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:29.914 [INFO][4017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" iface="eth0" netns="/var/run/netns/cni-0477846e-ae75-30d8-64d5-50e09fef8822" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:29.914 [INFO][4017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:29.914 [INFO][4017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:30.138 [INFO][4053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" HandleID="k8s-pod-network.8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:30.138 [INFO][4053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:30.139 [INFO][4053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:30.151 [WARNING][4053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" HandleID="k8s-pod-network.8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:30.152 [INFO][4053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" HandleID="k8s-pod-network.8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:30.154 [INFO][4053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:30.158716 env[1564]: 2025-09-13 00:54:30.157 [INFO][4017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:30.159761 env[1564]: time="2025-09-13T00:54:30.158768695Z" level=info msg="TearDown network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\" successfully" Sep 13 00:54:30.159761 env[1564]: time="2025-09-13T00:54:30.158792295Z" level=info msg="StopPodSandbox for \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\" returns successfully" Sep 13 00:54:30.160607 env[1564]: time="2025-09-13T00:54:30.160568989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748d86bbf-sqqp6,Uid:befb9c20-74f9-48dc-9181-e5e1cb0477a7,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:54:30.165501 env[1564]: time="2025-09-13T00:54:30.165471071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-skrlz,Uid:1a2d03cb-de38-46e3-bef7-5c63c9032e67,Namespace:kube-system,Attempt:1,}" Sep 13 00:54:30.177592 kubelet[2667]: I0913 00:54:30.177339 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vs7qr\" (UniqueName: \"kubernetes.io/projected/d3b16c63-b063-4baa-8efb-9b84f8896a23-kube-api-access-vs7qr\") pod \"whisker-7cccf4cd77-447vd\" (UID: \"d3b16c63-b063-4baa-8efb-9b84f8896a23\") " pod="calico-system/whisker-7cccf4cd77-447vd" Sep 13 00:54:30.177592 kubelet[2667]: I0913 00:54:30.177406 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3b16c63-b063-4baa-8efb-9b84f8896a23-whisker-backend-key-pair\") pod \"whisker-7cccf4cd77-447vd\" (UID: \"d3b16c63-b063-4baa-8efb-9b84f8896a23\") " pod="calico-system/whisker-7cccf4cd77-447vd" Sep 13 00:54:30.177592 kubelet[2667]: I0913 00:54:30.177449 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3b16c63-b063-4baa-8efb-9b84f8896a23-whisker-ca-bundle\") pod \"whisker-7cccf4cd77-447vd\" (UID: \"d3b16c63-b063-4baa-8efb-9b84f8896a23\") " pod="calico-system/whisker-7cccf4cd77-447vd" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:29.949 [INFO][4031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:29.949 [INFO][4031] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" iface="eth0" netns="/var/run/netns/cni-2c0878b4-e033-adec-c25f-9bdca103d486" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:29.951 [INFO][4031] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" iface="eth0" netns="/var/run/netns/cni-2c0878b4-e033-adec-c25f-9bdca103d486" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:29.951 [INFO][4031] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" iface="eth0" netns="/var/run/netns/cni-2c0878b4-e033-adec-c25f-9bdca103d486" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:29.951 [INFO][4031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:29.952 [INFO][4031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:30.186 [INFO][4059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" HandleID="k8s-pod-network.6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:30.186 [INFO][4059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:30.187 [INFO][4059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:30.201 [WARNING][4059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" HandleID="k8s-pod-network.6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:30.202 [INFO][4059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" HandleID="k8s-pod-network.6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:30.210 [INFO][4059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:30.216640 env[1564]: 2025-09-13 00:54:30.213 [INFO][4031] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:30.216640 env[1564]: time="2025-09-13T00:54:30.215514294Z" level=info msg="TearDown network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\" successfully" Sep 13 00:54:30.216640 env[1564]: time="2025-09-13T00:54:30.215549494Z" level=info msg="StopPodSandbox for \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\" returns successfully" Sep 13 00:54:30.216640 env[1564]: time="2025-09-13T00:54:30.216224791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c444948d-hspwf,Uid:e0838642-fa34-4ba2-b6e9-33b770e8c2d4,Namespace:calico-system,Attempt:1,}" Sep 13 00:54:30.366029 env[1564]: time="2025-09-13T00:54:30.365971060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cccf4cd77-447vd,Uid:d3b16c63-b063-4baa-8efb-9b84f8896a23,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:30.428356 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:30.422138 
systemd-networkd[1746]: cali9a39f1fe629: Link UP Sep 13 00:54:30.440280 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9a39f1fe629: link becomes ready Sep 13 00:54:30.447949 systemd-networkd[1746]: cali9a39f1fe629: Gained carrier Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.240 [INFO][4093] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.256 [INFO][4093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0 coredns-7c65d6cfc9- kube-system 0603fd61-ad5b-4bb1-81d5-450dc870214c 917 0 2025-09-13 00:53:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 coredns-7c65d6cfc9-794cx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9a39f1fe629 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-794cx" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.256 [INFO][4093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-794cx" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.325 [INFO][4114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" HandleID="k8s-pod-network.39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" 
Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.326 [INFO][4114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" HandleID="k8s-pod-network.39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c8ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-1677b4f607", "pod":"coredns-7c65d6cfc9-794cx", "timestamp":"2025-09-13 00:54:30.325516203 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.326 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.326 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.326 [INFO][4114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.339 [INFO][4114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.343 [INFO][4114] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.350 [INFO][4114] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.352 [INFO][4114] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.355 [INFO][4114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.355 [INFO][4114] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.356 [INFO][4114] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.382 [INFO][4114] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.391 [INFO][4114] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.65/26] block=192.168.28.64/26 
handle="k8s-pod-network.39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.392 [INFO][4114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.65/26] handle="k8s-pod-network.39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.392 [INFO][4114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:30.510686 env[1564]: 2025-09-13 00:54:30.392 [INFO][4114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.65/26] IPv6=[] ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" HandleID="k8s-pod-network.39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.511583 env[1564]: 2025-09-13 00:54:30.398 [INFO][4093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-794cx" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0603fd61-ad5b-4bb1-81d5-450dc870214c", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"coredns-7c65d6cfc9-794cx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a39f1fe629", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:30.511583 env[1564]: 2025-09-13 00:54:30.398 [INFO][4093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.65/32] ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-794cx" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.511583 env[1564]: 2025-09-13 00:54:30.398 [INFO][4093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a39f1fe629 ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-794cx" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.511583 env[1564]: 2025-09-13 00:54:30.452 [INFO][4093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-794cx" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.511583 env[1564]: 2025-09-13 00:54:30.465 [INFO][4093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-794cx" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0603fd61-ad5b-4bb1-81d5-450dc870214c", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f", Pod:"coredns-7c65d6cfc9-794cx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a39f1fe629", MAC:"62:88:f7:67:fa:84", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:30.511583 env[1564]: 2025-09-13 00:54:30.483 [INFO][4093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-794cx" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:30.572683 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali441ae136b3d: link becomes ready Sep 13 00:54:30.576419 systemd-networkd[1746]: cali441ae136b3d: Link UP Sep 13 00:54:30.576666 systemd-networkd[1746]: cali441ae136b3d: Gained carrier Sep 13 00:54:30.583468 env[1564]: time="2025-09-13T00:54:30.583401789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:30.583578 env[1564]: time="2025-09-13T00:54:30.583489488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:30.583578 env[1564]: time="2025-09-13T00:54:30.583520988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:30.583718 env[1564]: time="2025-09-13T00:54:30.583686288Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f pid=4188 runtime=io.containerd.runc.v2 Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.298 [INFO][4102] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.316 [INFO][4102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0 calico-apiserver-748d86bbf- calico-apiserver befb9c20-74f9-48dc-9181-e5e1cb0477a7 919 0 2025-09-13 00:53:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748d86bbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 calico-apiserver-748d86bbf-sqqp6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali441ae136b3d [] [] }} ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-sqqp6" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.316 [INFO][4102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-sqqp6" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.504 [INFO][4133] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.504 [INFO][4133] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-1677b4f607", "pod":"calico-apiserver-748d86bbf-sqqp6", "timestamp":"2025-09-13 00:54:30.495093202 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.505 [INFO][4133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.505 [INFO][4133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.505 [INFO][4133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.518 [INFO][4133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.523 [INFO][4133] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.528 [INFO][4133] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.532 [INFO][4133] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.535 [INFO][4133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.535 [INFO][4133] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.536 [INFO][4133] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.541 [INFO][4133] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.550 [INFO][4133] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.66/26] block=192.168.28.64/26 
handle="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.550 [INFO][4133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.66/26] handle="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.551 [INFO][4133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:30.596386 env[1564]: 2025-09-13 00:54:30.551 [INFO][4133] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.66/26] IPv6=[] ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:30.597316 env[1564]: 2025-09-13 00:54:30.563 [INFO][4102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-sqqp6" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0", GenerateName:"calico-apiserver-748d86bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"befb9c20-74f9-48dc-9181-e5e1cb0477a7", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748d86bbf", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"calico-apiserver-748d86bbf-sqqp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali441ae136b3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:30.597316 env[1564]: 2025-09-13 00:54:30.563 [INFO][4102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.66/32] ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-sqqp6" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:30.597316 env[1564]: 2025-09-13 00:54:30.563 [INFO][4102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali441ae136b3d ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-sqqp6" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:30.597316 env[1564]: 2025-09-13 00:54:30.575 [INFO][4102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-sqqp6" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 
00:54:30.597316 env[1564]: 2025-09-13 00:54:30.575 [INFO][4102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-sqqp6" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0", GenerateName:"calico-apiserver-748d86bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"befb9c20-74f9-48dc-9181-e5e1cb0477a7", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748d86bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c", Pod:"calico-apiserver-748d86bbf-sqqp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali441ae136b3d", MAC:"56:56:3a:29:a9:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 
00:54:30.597316 env[1564]: 2025-09-13 00:54:30.592 [INFO][4102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-sqqp6" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:30.693274 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif85ea771757: link becomes ready Sep 13 00:54:30.692990 systemd-networkd[1746]: calif85ea771757: Link UP Sep 13 00:54:30.693164 systemd-networkd[1746]: calif85ea771757: Gained carrier Sep 13 00:54:30.704854 env[1564]: time="2025-09-13T00:54:30.704743858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:30.705017 env[1564]: time="2025-09-13T00:54:30.704883958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:30.705017 env[1564]: time="2025-09-13T00:54:30.704914158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:30.705254 env[1564]: time="2025-09-13T00:54:30.705206557Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c pid=4237 runtime=io.containerd.runc.v2 Sep 13 00:54:30.726215 env[1564]: time="2025-09-13T00:54:30.726100883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-794cx,Uid:0603fd61-ad5b-4bb1-81d5-450dc870214c,Namespace:kube-system,Attempt:1,} returns sandbox id \"39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f\"" Sep 13 00:54:30.729410 env[1564]: time="2025-09-13T00:54:30.729360171Z" level=info msg="CreateContainer within sandbox \"39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.409 [INFO][4120] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.441 [INFO][4120] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0 coredns-7c65d6cfc9- kube-system 1a2d03cb-de38-46e3-bef7-5c63c9032e67 918 0 2025-09-13 00:53:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 coredns-7c65d6cfc9-skrlz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif85ea771757 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skrlz" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-" Sep 13 
00:54:30.730086 env[1564]: 2025-09-13 00:54:30.441 [INFO][4120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skrlz" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.626 [INFO][4157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" HandleID="k8s-pod-network.4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.626 [INFO][4157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" HandleID="k8s-pod-network.4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cc1f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-1677b4f607", "pod":"coredns-7c65d6cfc9-skrlz", "timestamp":"2025-09-13 00:54:30.621663253 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.626 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.626 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.626 [INFO][4157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.645 [INFO][4157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.652 [INFO][4157] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.656 [INFO][4157] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.658 [INFO][4157] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.660 [INFO][4157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.660 [INFO][4157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.662 [INFO][4157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3 Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.669 [INFO][4157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.676 [INFO][4157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.67/26] block=192.168.28.64/26 
handle="k8s-pod-network.4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.676 [INFO][4157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.67/26] handle="k8s-pod-network.4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.676 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:30.730086 env[1564]: 2025-09-13 00:54:30.677 [INFO][4157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.67/26] IPv6=[] ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" HandleID="k8s-pod-network.4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.730916 env[1564]: 2025-09-13 00:54:30.685 [INFO][4120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skrlz" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1a2d03cb-de38-46e3-bef7-5c63c9032e67", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"coredns-7c65d6cfc9-skrlz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif85ea771757", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:30.730916 env[1564]: 2025-09-13 00:54:30.685 [INFO][4120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.67/32] ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skrlz" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.730916 env[1564]: 2025-09-13 00:54:30.685 [INFO][4120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif85ea771757 ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skrlz" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.730916 env[1564]: 2025-09-13 00:54:30.692 [INFO][4120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skrlz" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.730916 env[1564]: 2025-09-13 00:54:30.701 [INFO][4120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skrlz" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1a2d03cb-de38-46e3-bef7-5c63c9032e67", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3", Pod:"coredns-7c65d6cfc9-skrlz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif85ea771757", MAC:"d6:52:d4:b9:47:8b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:30.730916 env[1564]: 2025-09-13 00:54:30.723 [INFO][4120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skrlz" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:30.901482 systemd-networkd[1746]: cali7c7ac925471: Link UP Sep 13 00:54:30.914288 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7c7ac925471: link becomes ready Sep 13 00:54:30.918377 systemd-networkd[1746]: cali7c7ac925471: Gained carrier Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.553 [INFO][4130] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.567 [INFO][4130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0 calico-kube-controllers-77c444948d- calico-system e0838642-fa34-4ba2-b6e9-33b770e8c2d4 920 0 2025-09-13 00:54:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77c444948d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 calico-kube-controllers-77c444948d-hspwf eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7c7ac925471 [] [] }} ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Namespace="calico-system" Pod="calico-kube-controllers-77c444948d-hspwf" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.567 [INFO][4130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Namespace="calico-system" Pod="calico-kube-controllers-77c444948d-hspwf" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.761 [INFO][4208] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" HandleID="k8s-pod-network.84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.761 [INFO][4208] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" HandleID="k8s-pod-network.84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001c7290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-1677b4f607", "pod":"calico-kube-controllers-77c444948d-hspwf", "timestamp":"2025-09-13 00:54:30.761608157 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.762 [INFO][4208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.762 [INFO][4208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.762 [INFO][4208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.769 [INFO][4208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.781 [INFO][4208] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.871 [INFO][4208] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.874 [INFO][4208] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.876 [INFO][4208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.876 [INFO][4208] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.878 [INFO][4208] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.883 [INFO][4208] ipam/ipam.go 1243: Writing block 
in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.897 [INFO][4208] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.68/26] block=192.168.28.64/26 handle="k8s-pod-network.84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.897 [INFO][4208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.68/26] handle="k8s-pod-network.84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.897 [INFO][4208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:31.036267 env[1564]: 2025-09-13 00:54:30.897 [INFO][4208] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.68/26] IPv6=[] ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" HandleID="k8s-pod-network.84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:31.037118 env[1564]: 2025-09-13 00:54:30.899 [INFO][4130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Namespace="calico-system" Pod="calico-kube-controllers-77c444948d-hspwf" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0", GenerateName:"calico-kube-controllers-77c444948d-", Namespace:"calico-system", SelfLink:"", UID:"e0838642-fa34-4ba2-b6e9-33b770e8c2d4", 
ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c444948d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"calico-kube-controllers-77c444948d-hspwf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c7ac925471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:31.037118 env[1564]: 2025-09-13 00:54:30.899 [INFO][4130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.68/32] ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Namespace="calico-system" Pod="calico-kube-controllers-77c444948d-hspwf" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:31.037118 env[1564]: 2025-09-13 00:54:30.899 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c7ac925471 ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Namespace="calico-system" Pod="calico-kube-controllers-77c444948d-hspwf" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 
00:54:31.037118 env[1564]: 2025-09-13 00:54:30.918 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Namespace="calico-system" Pod="calico-kube-controllers-77c444948d-hspwf" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:31.037118 env[1564]: 2025-09-13 00:54:30.919 [INFO][4130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Namespace="calico-system" Pod="calico-kube-controllers-77c444948d-hspwf" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0", GenerateName:"calico-kube-controllers-77c444948d-", Namespace:"calico-system", SelfLink:"", UID:"e0838642-fa34-4ba2-b6e9-33b770e8c2d4", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c444948d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e", Pod:"calico-kube-controllers-77c444948d-hspwf", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c7ac925471", MAC:"96:29:da:a5:62:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:31.037118 env[1564]: 2025-09-13 00:54:30.945 [INFO][4130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e" Namespace="calico-system" Pod="calico-kube-controllers-77c444948d-hspwf" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:31.075861 systemd[1]: run-netns-cni\x2d2c0878b4\x2de033\x2dadec\x2dc25f\x2d9bdca103d486.mount: Deactivated successfully. Sep 13 00:54:31.076172 systemd[1]: run-netns-cni\x2d0477846e\x2dae75\x2d30d8\x2d64d5\x2d50e09fef8822.mount: Deactivated successfully. Sep 13 00:54:31.085869 env[1564]: time="2025-09-13T00:54:31.075025049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:31.085869 env[1564]: time="2025-09-13T00:54:31.075072649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:31.086877 env[1564]: time="2025-09-13T00:54:31.086821908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:31.088384 env[1564]: time="2025-09-13T00:54:31.088343003Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3 pid=4332 runtime=io.containerd.runc.v2 Sep 13 00:54:31.137964 systemd[1]: run-containerd-runc-k8s.io-4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3-runc.zzGgvw.mount: Deactivated successfully. Sep 13 00:54:31.179503 systemd-networkd[1746]: cali184aee36f15: Link UP Sep 13 00:54:31.190295 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali184aee36f15: link becomes ready Sep 13 00:54:31.192304 systemd-networkd[1746]: cali184aee36f15: Gained carrier Sep 13 00:54:31.197000 audit[4389]: AVC avc: denied { write } for pid=4389 comm="tee" name="fd" dev="proc" ino=31930 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:31.215229 kernel: audit: type=1400 audit(1757724871.197:305): avc: denied { write } for pid=4389 comm="tee" name="fd" dev="proc" ino=31930 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:31.197000 audit[4389]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcc8ad67c3 a2=241 a3=1b6 items=1 ppid=4288 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.247732 kernel: audit: type=1300 audit(1757724871.197:305): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcc8ad67c3 a2=241 a3=1b6 items=1 ppid=4288 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.248821 env[1564]: 
time="2025-09-13T00:54:31.248712443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748d86bbf-sqqp6,Uid:befb9c20-74f9-48dc-9181-e5e1cb0477a7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\"" Sep 13 00:54:31.256507 env[1564]: time="2025-09-13T00:54:31.256468216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:30.642 [INFO][4163] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:30.737 [INFO][4163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0 whisker-7cccf4cd77- calico-system d3b16c63-b063-4baa-8efb-9b84f8896a23 936 0 2025-09-13 00:54:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cccf4cd77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 whisker-7cccf4cd77-447vd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali184aee36f15 [] [] }} ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Namespace="calico-system" Pod="whisker-7cccf4cd77-447vd" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:30.737 [INFO][4163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Namespace="calico-system" Pod="whisker-7cccf4cd77-447vd" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:30.795 [INFO][4262] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" HandleID="k8s-pod-network.29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:30.866 [INFO][4262] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" HandleID="k8s-pod-network.29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-1677b4f607", "pod":"whisker-7cccf4cd77-447vd", "timestamp":"2025-09-13 00:54:30.795430837 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:30.867 [INFO][4262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:30.899 [INFO][4262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.029 [INFO][4262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.040 [INFO][4262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.045 [INFO][4262] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.067 [INFO][4262] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.086 [INFO][4262] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.089 [INFO][4262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.089 [INFO][4262] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.095 [INFO][4262] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284 Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.108 [INFO][4262] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.133 [INFO][4262] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.69/26] block=192.168.28.64/26 
handle="k8s-pod-network.29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.133 [INFO][4262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.69/26] handle="k8s-pod-network.29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.133 [INFO][4262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:31.258742 env[1564]: 2025-09-13 00:54:31.133 [INFO][4262] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.69/26] IPv6=[] ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" HandleID="k8s-pod-network.29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" Sep 13 00:54:31.259633 env[1564]: 2025-09-13 00:54:31.138 [INFO][4163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Namespace="calico-system" Pod="whisker-7cccf4cd77-447vd" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0", GenerateName:"whisker-7cccf4cd77-", Namespace:"calico-system", SelfLink:"", UID:"d3b16c63-b063-4baa-8efb-9b84f8896a23", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cccf4cd77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"whisker-7cccf4cd77-447vd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali184aee36f15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:31.259633 env[1564]: 2025-09-13 00:54:31.138 [INFO][4163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.69/32] ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Namespace="calico-system" Pod="whisker-7cccf4cd77-447vd" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" Sep 13 00:54:31.259633 env[1564]: 2025-09-13 00:54:31.139 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali184aee36f15 ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Namespace="calico-system" Pod="whisker-7cccf4cd77-447vd" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" Sep 13 00:54:31.259633 env[1564]: 2025-09-13 00:54:31.208 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Namespace="calico-system" Pod="whisker-7cccf4cd77-447vd" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" Sep 13 00:54:31.259633 env[1564]: 2025-09-13 00:54:31.208 [INFO][4163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Namespace="calico-system" Pod="whisker-7cccf4cd77-447vd" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0", GenerateName:"whisker-7cccf4cd77-", Namespace:"calico-system", SelfLink:"", UID:"d3b16c63-b063-4baa-8efb-9b84f8896a23", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cccf4cd77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284", Pod:"whisker-7cccf4cd77-447vd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali184aee36f15", MAC:"f2:8e:49:06:f2:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:31.259633 env[1564]: 2025-09-13 00:54:31.252 [INFO][4163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284" Namespace="calico-system" Pod="whisker-7cccf4cd77-447vd" 
WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--7cccf4cd77--447vd-eth0" Sep 13 00:54:31.197000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 00:54:31.289237 kernel: audit: type=1307 audit(1757724871.197:305): cwd="/etc/service/enabled/confd/log" Sep 13 00:54:31.197000 audit: PATH item=0 name="/dev/fd/63" inode=31886 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:31.316264 kernel: audit: type=1302 audit(1757724871.197:305): item=0 name="/dev/fd/63" inode=31886 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:31.316319 env[1564]: time="2025-09-13T00:54:31.308017036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-skrlz,Uid:1a2d03cb-de38-46e3-bef7-5c63c9032e67,Namespace:kube-system,Attempt:1,} returns sandbox id \"4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3\"" Sep 13 00:54:31.316319 env[1564]: time="2025-09-13T00:54:31.315655409Z" level=info msg="CreateContainer within sandbox \"4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:54:31.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:31.382584 kernel: audit: type=1327 audit(1757724871.197:305): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:31.382661 kernel: audit: type=1400 audit(1757724871.252:306): avc: denied { write } for pid=4402 comm="tee" name="fd" dev="proc" ino=31313 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 
00:54:31.252000 audit[4402]: AVC avc: denied { write } for pid=4402 comm="tee" name="fd" dev="proc" ino=31313 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:31.252000 audit[4402]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd96087b4 a2=241 a3=1b6 items=1 ppid=4351 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.411573 kernel: audit: type=1300 audit(1757724871.252:306): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd96087b4 a2=241 a3=1b6 items=1 ppid=4351 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.252000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:54:31.439280 kernel: audit: type=1307 audit(1757724871.252:306): cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:54:31.252000 audit: PATH item=0 name="/dev/fd/63" inode=31301 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:31.461286 kernel: audit: type=1302 audit(1757724871.252:306): item=0 name="/dev/fd/63" inode=31301 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:31.471071 env[1564]: time="2025-09-13T00:54:31.471027467Z" level=info msg="CreateContainer within sandbox \"39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3df8b1c930b7d3b90deb03964f595a642d52914667c8355b307f9e97f41ac533\"" Sep 13 00:54:31.471735 env[1564]: 
time="2025-09-13T00:54:31.471708765Z" level=info msg="StartContainer for \"3df8b1c930b7d3b90deb03964f595a642d52914667c8355b307f9e97f41ac533\"" Sep 13 00:54:31.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:31.489450 kernel: audit: type=1327 audit(1757724871.252:306): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:31.267000 audit[4422]: AVC avc: denied { write } for pid=4422 comm="tee" name="fd" dev="proc" ino=31985 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:31.267000 audit[4422]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde17b97b3 a2=241 a3=1b6 items=1 ppid=4342 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.267000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:54:31.267000 audit: PATH item=0 name="/dev/fd/63" inode=31982 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:31.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:31.271000 audit[4407]: AVC avc: denied { write } for pid=4407 comm="tee" name="fd" dev="proc" ino=31326 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:31.271000 audit[4407]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdbefaf7c4 a2=241 a3=1b6 items=1 ppid=4344 pid=4407 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.271000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 00:54:31.271000 audit: PATH item=0 name="/dev/fd/63" inode=31304 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:31.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:31.297000 audit[4427]: AVC avc: denied { write } for pid=4427 comm="tee" name="fd" dev="proc" ino=31330 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:31.297000 audit[4427]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4bee97c3 a2=241 a3=1b6 items=1 ppid=4348 pid=4427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.297000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 13 00:54:31.297000 audit: PATH item=0 name="/dev/fd/63" inode=31989 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:31.297000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:31.298000 audit[4438]: AVC avc: denied { write } for pid=4438 comm="tee" name="fd" dev="proc" ino=32013 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:31.298000 audit[4438]: SYSCALL arch=c000003e syscall=257 
success=yes exit=3 a0=ffffff9c a1=7fffcc2da7c5 a2=241 a3=1b6 items=1 ppid=4353 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.298000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 00:54:31.298000 audit: PATH item=0 name="/dev/fd/63" inode=31992 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:31.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:31.321000 audit[4440]: AVC avc: denied { write } for pid=4440 comm="tee" name="fd" dev="proc" ino=31334 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:31.321000 audit[4440]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc51b837c3 a2=241 a3=1b6 items=1 ppid=4346 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.321000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 00:54:31.321000 audit: PATH item=0 name="/dev/fd/63" inode=31999 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:31.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:31.529932 env[1564]: time="2025-09-13T00:54:31.529869162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:31.530138 env[1564]: time="2025-09-13T00:54:31.530109961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:31.530308 env[1564]: time="2025-09-13T00:54:31.530284360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:31.530602 env[1564]: time="2025-09-13T00:54:31.530563159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e pid=4490 runtime=io.containerd.runc.v2 Sep 13 00:54:31.611846 env[1564]: time="2025-09-13T00:54:31.611714576Z" level=info msg="StartContainer for \"3df8b1c930b7d3b90deb03964f595a642d52914667c8355b307f9e97f41ac533\" returns successfully" Sep 13 00:54:31.632523 env[1564]: time="2025-09-13T00:54:31.632462804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:31.632727 env[1564]: time="2025-09-13T00:54:31.632704303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:31.632843 env[1564]: time="2025-09-13T00:54:31.632819902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:31.633079 env[1564]: time="2025-09-13T00:54:31.633053602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284 pid=4546 runtime=io.containerd.runc.v2 Sep 13 00:54:31.782272 kubelet[2667]: I0913 00:54:31.782239 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b59c29-b153-48e3-b1f8-09c6220faf33" path="/var/lib/kubelet/pods/87b59c29-b153-48e3-b1f8-09c6220faf33/volumes" Sep 13 00:54:31.783211 env[1564]: time="2025-09-13T00:54:31.783162778Z" level=info msg="StopPodSandbox for \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\"" Sep 13 00:54:31.812373 systemd-networkd[1746]: cali441ae136b3d: Gained IPv6LL Sep 13 00:54:31.873770 env[1564]: time="2025-09-13T00:54:31.873654762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cccf4cd77-447vd,Uid:d3b16c63-b063-4baa-8efb-9b84f8896a23,Namespace:calico-system,Attempt:0,} returns sandbox id \"29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284\"" Sep 13 00:54:31.899053 env[1564]: time="2025-09-13T00:54:31.899008573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c444948d-hspwf,Uid:e0838642-fa34-4ba2-b6e9-33b770e8c2d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e\"" Sep 13 00:54:31.928000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.928000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.928000 audit[4627]: AVC avc: denied { perfmon } 
for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.928000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.928000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.928000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.928000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.928000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.928000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.928000 audit: BPF prog-id=10 op=LOAD Sep 13 00:54:31.928000 audit[4627]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe58d81aa0 a2=98 a3=1fffffffffffffff items=0 ppid=4349 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.928000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:31.930000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:54:31.930000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.930000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.930000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.930000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.930000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.930000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.930000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.930000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.930000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.930000 audit: BPF prog-id=11 op=LOAD Sep 13 00:54:31.930000 audit[4627]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe58d81980 a2=94 a3=3 items=0 ppid=4349 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.930000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:31.945000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { bpf } for pid=4627 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit: BPF prog-id=12 op=LOAD Sep 13 00:54:31.947000 audit[4627]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe58d819c0 a2=94 a3=7ffe58d81ba0 items=0 ppid=4349 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.947000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:31.947000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:54:31.947000 audit[4627]: AVC avc: denied { perfmon } for pid=4627 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.947000 audit[4627]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=3 a0=0 a1=7ffe58d81a90 a2=50 a3=a000000085 items=0 ppid=4349 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.947000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:31.953000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.953000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.953000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.953000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.953000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.953000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.953000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.953000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.953000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.953000 audit: BPF prog-id=13 op=LOAD Sep 13 00:54:31.953000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe40290080 a2=98 a3=3 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.953000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:31.957000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:54:31.957000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.957000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.957000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.957000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.957000 audit[4633]: AVC avc: denied { perfmon } for 
pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.957000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.957000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.957000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.957000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.957000 audit: BPF prog-id=14 op=LOAD Sep 13 00:54:31.957000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe4028fe70 a2=94 a3=54428f items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.957000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:31.958000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:54:31.958000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.958000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.958000 
audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.958000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.958000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.958000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.958000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.958000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.958000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:31.958000 audit: BPF prog-id=15 op=LOAD Sep 13 00:54:31.958000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe4028fea0 a2=94 a3=2 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:31.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:31.961000 audit: BPF 
prog-id=15 op=UNLOAD Sep 13 00:54:31.983700 env[1564]: time="2025-09-13T00:54:31.976359903Z" level=info msg="CreateContainer within sandbox \"4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f88baa562f519917b394de65c8e1325aa653b3bdb8c32dbcc3772f4875f40d75\"" Sep 13 00:54:31.983700 env[1564]: time="2025-09-13T00:54:31.977006101Z" level=info msg="StartContainer for \"f88baa562f519917b394de65c8e1325aa653b3bdb8c32dbcc3772f4875f40d75\"" Sep 13 00:54:31.983798 kubelet[2667]: I0913 00:54:31.982379 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-794cx" podStartSLOduration=46.982361982 podStartE2EDuration="46.982361982s" podCreationTimestamp="2025-09-13 00:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:31.981860184 +0000 UTC m=+52.357913572" watchObservedRunningTime="2025-09-13 00:54:31.982361982 +0000 UTC m=+52.358415370" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:31.931 [INFO][4596] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:31.932 [INFO][4596] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" iface="eth0" netns="/var/run/netns/cni-075bf4d5-70f9-6afd-c615-1afdb1955ae7" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:31.932 [INFO][4596] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" iface="eth0" netns="/var/run/netns/cni-075bf4d5-70f9-6afd-c615-1afdb1955ae7" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:31.932 [INFO][4596] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" iface="eth0" netns="/var/run/netns/cni-075bf4d5-70f9-6afd-c615-1afdb1955ae7" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:31.932 [INFO][4596] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:31.932 [INFO][4596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:32.018 [INFO][4629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" HandleID="k8s-pod-network.c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:32.018 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:32.019 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:32.036 [WARNING][4629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" HandleID="k8s-pod-network.c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:32.036 [INFO][4629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" HandleID="k8s-pod-network.c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:32.041 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:32.059951 env[1564]: 2025-09-13 00:54:32.043 [INFO][4596] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:32.060729 env[1564]: time="2025-09-13T00:54:32.060682812Z" level=info msg="TearDown network for sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\" successfully" Sep 13 00:54:32.060832 env[1564]: time="2025-09-13T00:54:32.060813012Z" level=info msg="StopPodSandbox for \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\" returns successfully" Sep 13 00:54:32.061570 env[1564]: time="2025-09-13T00:54:32.061539609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4bc6787-kfxxk,Uid:15e9425d-6b94-4324-8278-89ee850f4d55,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:54:32.063000 audit[4652]: NETFILTER_CFG table=filter:108 family=2 entries=20 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:32.063000 audit[4652]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7101d280 a2=0 a3=7ffe7101d26c items=0 ppid=2776 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.063000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:32.073000 audit[4652]: NETFILTER_CFG table=nat:109 family=2 entries=14 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:32.073000 audit[4652]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe7101d280 a2=0 a3=0 items=0 ppid=2776 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.073000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:32.071160 systemd[1]: run-netns-cni\x2d075bf4d5\x2d70f9\x2d6afd\x2dc615\x2d1afdb1955ae7.mount: Deactivated successfully. 
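An aside on reading the records above: the `PROCTITLE proctitle=…` fields are the kernel audit subsystem's hex encoding of the process argv, with NUL bytes separating arguments, and the `capability=38` / `capability=39` values in the AVC denials correspond to `CAP_PERFMON` and `CAP_BPF`. A minimal decoding sketch (the function name `decode_proctitle` is illustrative, not from any tool in the log):

```python
# Decode audit PROCTITLE hex blobs back into readable command lines.
# The audit subsystem hex-encodes argv with NUL (0x00) bytes between arguments.
import binascii

# Capability numbers seen in the AVC denial records above (Linux 5.8+)
CAPABILITIES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

def decode_proctitle(hex_str: str) -> str:
    """Turn an audit PROCTITLE hex blob into a space-joined command line."""
    raw = binascii.unhexlify(hex_str)
    # argv elements are NUL-separated; join with spaces for display
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

# Example values taken verbatim from the log above:
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# bpftool map list --json
```

Applied to the longer proctitle blobs in this section, the same decoding yields the Calico map-creation command (`bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs …`) and the kube-proxy-style `iptables-restore -w 5 -W 100000 --noflush --counters` invocations.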
Sep 13 00:54:32.106000 audit[4661]: NETFILTER_CFG table=filter:110 family=2 entries=17 op=nft_register_rule pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:32.106000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd2376c10 a2=0 a3=7ffcd2376bfc items=0 ppid=2776 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.106000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:32.111000 audit[4661]: NETFILTER_CFG table=nat:111 family=2 entries=35 op=nft_register_chain pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:32.111000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcd2376c10 a2=0 a3=7ffcd2376bfc items=0 ppid=2776 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.111000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:32.176006 env[1564]: time="2025-09-13T00:54:32.175910116Z" level=info msg="StartContainer for \"f88baa562f519917b394de65c8e1325aa653b3bdb8c32dbcc3772f4875f40d75\" returns successfully" Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.230000 audit: BPF prog-id=16 op=LOAD Sep 13 00:54:32.230000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe4028fd60 a2=94 a3=1 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.230000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E 
Sep 13 00:54:32.230000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:54:32.230000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.230000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe4028fe30 a2=50 a3=7ffe4028ff10 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.230000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe4028fd70 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe4028fda0 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 
00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe4028fcb0 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe4028fdc0 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe4028fda0 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe4028fd90 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe4028fdc0 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe4028fda0 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe4028fdc0 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe4028fd90 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.239000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.239000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe4028fe00 a2=28 a3=0 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Sep 13 00:54:32.240000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe4028fbb0 a2=50 a3=1 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.240000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for 
pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit: BPF prog-id=17 op=LOAD Sep 13 00:54:32.240000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe4028fbb0 a2=94 a3=5 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.240000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.240000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe4028fc60 a2=50 a3=1 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.240000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe4028fd80 a2=4 a3=38 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.240000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { confidentiality } for pid=4633 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:32.240000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe4028fdd0 a2=94 a3=6 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.240000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { confidentiality } for pid=4633 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:32.240000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe4028f580 a2=94 a3=88 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.240000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.240000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe4028f580 a2=94 a3=88 items=0 ppid=4349 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.240000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:54:32.249000 audit: BPF prog-id=18 op=LOAD Sep 13 00:54:32.249000 audit[4680]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdab662da0 a2=98 a3=1999999999999999 items=0 ppid=4349 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.249000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:54:32.249000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit: BPF prog-id=19 op=LOAD Sep 13 00:54:32.249000 audit[4680]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdab662c80 a2=94 a3=ffff items=0 ppid=4349 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.249000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:54:32.249000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { perfmon } for pid=4680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit[4680]: AVC avc: denied { bpf } for pid=4680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.249000 audit: BPF prog-id=20 op=LOAD Sep 13 00:54:32.249000 audit[4680]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdab662cc0 a2=94 a3=7ffdab662ea0 items=0 ppid=4349 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.249000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:54:32.249000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:54:32.260308 systemd-networkd[1746]: cali7c7ac925471: Gained IPv6LL Sep 13 00:54:32.325362 systemd-networkd[1746]: cali9a39f1fe629: Gained IPv6LL Sep 13 00:54:32.447000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.453957 systemd-networkd[1746]: calif85ea771757: Gained IPv6LL Sep 13 00:54:32.447000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.447000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.447000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.447000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.447000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.447000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.447000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.447000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.447000 audit: BPF prog-id=21 op=LOAD Sep 13 00:54:32.447000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe19832350 a2=98 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.447000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit: BPF prog-id=22 op=LOAD Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe19832160 a2=94 a3=54428f items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit: BPF prog-id=23 op=LOAD Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe19832190 a2=94 a3=2 items=0 ppid=4349 pid=4706 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe19832060 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe19832090 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe19831fa0 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe198320b0 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe19832090 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe19832080 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe198320b0 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe19832090 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.454000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.454000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe198320b0 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe19832080 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.455000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe198320f0 a2=28 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.455000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { perfmon } for 
pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit: BPF prog-id=24 op=LOAD Sep 13 00:54:32.455000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe19831f60 a2=94 a3=0 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.455000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.455000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:54:32.455000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.455000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe19831f50 a2=50 a3=2800 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.455000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe19831f50 a2=50 a3=2800 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.457000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit: BPF prog-id=25 op=LOAD Sep 13 00:54:32.457000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe19831770 a2=94 a3=2 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.457000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.457000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { perfmon } for pid=4706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit[4706]: AVC avc: denied { bpf } for pid=4706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.457000 audit: BPF prog-id=26 op=LOAD Sep 13 00:54:32.457000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe19831870 a2=94 a3=30 items=0 ppid=4349 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.457000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:32.461000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.461000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 
13 00:54:32.461000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.461000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.461000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.461000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.461000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.461000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.461000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.461000 audit: BPF prog-id=27 op=LOAD Sep 13 00:54:32.461000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc7f111ef0 a2=98 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.461000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.461000 audit: BPF prog-id=27 op=UNLOAD Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC 
avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit: BPF prog-id=28 op=LOAD Sep 13 00:54:32.462000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7f111ce0 a2=94 a3=54428f items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.462000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.462000 audit: BPF prog-id=28 op=UNLOAD Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.462000 audit: BPF prog-id=29 op=LOAD Sep 13 00:54:32.462000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7f111d10 a2=94 a3=2 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.462000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.462000 audit: BPF prog-id=29 op=UNLOAD Sep 13 00:54:32.567755 systemd-networkd[1746]: vxlan.calico: Link UP Sep 13 00:54:32.567764 systemd-networkd[1746]: vxlan.calico: Gained carrier Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit: BPF prog-id=30 op=LOAD Sep 13 00:54:32.595000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7f111bd0 a2=94 a3=1 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.595000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.595000 audit: BPF prog-id=30 op=UNLOAD Sep 13 00:54:32.595000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.595000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc7f111ca0 a2=50 a3=7ffc7f111d80 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.595000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc7f111be0 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc7f111c10 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc7f111b20 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc7f111c30 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc7f111c10 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc7f111c00 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc7f111c30 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc7f111c10 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc7f111c30 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc7f111c00 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.604000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.604000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc7f111c70 a2=28 a3=0 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc7f111a20 a2=50 a3=1 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit: BPF prog-id=31 op=LOAD Sep 13 00:54:32.605000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc7f111a20 a2=94 a3=5 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.605000 audit: BPF prog-id=31 op=UNLOAD Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc7f111ad0 a2=50 a3=1 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc7f111bf0 a2=4 a3=38 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: 
denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { confidentiality } for pid=4711 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:32.605000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc7f111c40 a2=94 a3=6 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { confidentiality } for pid=4711 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:32.605000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc7f1113f0 a2=94 a3=88 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: AVC avc: denied { perfmon } for pid=4711 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.605000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc7f1113f0 a2=94 a3=88 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.606000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.606000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc7f112e20 a2=10 a3=f8f00800 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.606000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.606000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.606000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f 
a1=7ffc7f112cc0 a2=10 a3=3 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.606000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.606000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.606000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc7f112c60 a2=10 a3=3 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.606000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.606000 audit[4711]: AVC avc: denied { bpf } for pid=4711 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:32.606000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc7f112c60 a2=10 a3=7 items=0 ppid=4349 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.606000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:32.614000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:54:32.772359 systemd-networkd[1746]: cali184aee36f15: Gained IPv6LL Sep 13 00:54:32.780701 env[1564]: time="2025-09-13T00:54:32.779837242Z" level=info msg="StopPodSandbox for \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\"" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.843 [INFO][4737] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.843 [INFO][4737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" iface="eth0" netns="/var/run/netns/cni-ccc16b6f-138f-6d5a-f215-a2486f2caf4e" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.843 [INFO][4737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" iface="eth0" netns="/var/run/netns/cni-ccc16b6f-138f-6d5a-f215-a2486f2caf4e" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.843 [INFO][4737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" iface="eth0" netns="/var/run/netns/cni-ccc16b6f-138f-6d5a-f215-a2486f2caf4e" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.843 [INFO][4737] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.843 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.894 [INFO][4756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" HandleID="k8s-pod-network.f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.908 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.908 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.934 [WARNING][4756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" HandleID="k8s-pod-network.f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.934 [INFO][4756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" HandleID="k8s-pod-network.f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.935 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:32.940969 env[1564]: 2025-09-13 00:54:32.937 [INFO][4737] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:32.949494 systemd[1]: run-netns-cni\x2dccc16b6f\x2d138f\x2d6d5a\x2df215\x2da2486f2caf4e.mount: Deactivated successfully. 
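An aside for readers of the audit records above: the `proctitle=` fields emitted by the kernel audit subsystem are hex-encoded command lines with NUL bytes separating the arguments. A minimal Python sketch to decode the value that repeats throughout these records (the hex string is copied from the log; the decoder itself is generic and not part of the log):

```python
# Audit PROCTITLE values are hex-encoded; argv entries are NUL-separated.
# This hex string is the proctitle= value from the bpftool records above.
hex_proctitle = (
    "627066746F6F6C002D2D6A736F6E002D2D707265747479"
    "0070726F670073686F770070696E6E6564002F7379732F"
    "66732F6270662F63616C69636F2F7864702F7072656669"
    "6C7465725F76315F63616C69636F5F746D705F41"
)

# Convert hex -> bytes, split on NUL, and rejoin as a readable command line.
args = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(a.decode() for a in args))
# prints: bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
```

This confirms the denied syscalls (syscall=321 is `bpf(2)` on x86_64) come from Calico's Felix invoking `bpftool` to inspect its pinned XDP prefilter program; the `{ bpf }` and `{ perfmon }` AVC denials refer to `CAP_BPF` (capability 39) and `CAP_PERFMON` (capability 38).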
Sep 13 00:54:32.951951 env[1564]: time="2025-09-13T00:54:32.950379956Z" level=info msg="TearDown network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\" successfully" Sep 13 00:54:32.951951 env[1564]: time="2025-09-13T00:54:32.950425956Z" level=info msg="StopPodSandbox for \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\" returns successfully" Sep 13 00:54:32.952857 env[1564]: time="2025-09-13T00:54:32.952824448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kg2m8,Uid:79f7874d-3642-4f78-9634-0fbf12fe0b02,Namespace:calico-system,Attempt:1,}" Sep 13 00:54:32.962000 audit[4781]: NETFILTER_CFG table=mangle:112 family=2 entries=16 op=nft_register_chain pid=4781 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:32.962000 audit[4781]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc56e8f120 a2=0 a3=7ffc56e8f10c items=0 ppid=4349 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:32.962000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:33.003956 kubelet[2667]: I0913 00:54:33.003894 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-skrlz" podStartSLOduration=48.003870972 podStartE2EDuration="48.003870972s" podCreationTimestamp="2025-09-13 00:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:32.995034403 +0000 UTC m=+53.371087691" watchObservedRunningTime="2025-09-13 00:54:33.003870972 +0000 UTC m=+53.379924260" Sep 13 00:54:33.005000 audit[4790]: NETFILTER_CFG table=filter:113 family=2 
entries=14 op=nft_register_rule pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:33.005000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde6633f10 a2=0 a3=7ffde6633efc items=0 ppid=2776 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:33.017000 audit[4790]: NETFILTER_CFG table=nat:114 family=2 entries=44 op=nft_register_rule pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:33.017000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffde6633f10 a2=0 a3=7ffde6633efc items=0 ppid=2776 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:33.073000 audit[4780]: NETFILTER_CFG table=nat:115 family=2 entries=15 op=nft_register_chain pid=4780 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:33.073000 audit[4780]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe241f89b0 a2=0 a3=7ffe241f899c items=0 ppid=4349 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.073000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:33.080307 systemd-networkd[1746]: cali019e7f326ba: Link UP Sep 13 00:54:33.086960 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali019e7f326ba: link becomes ready Sep 13 00:54:33.087330 systemd-networkd[1746]: cali019e7f326ba: Gained carrier Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:32.878 [INFO][4742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0 calico-apiserver-67c4bc6787- calico-apiserver 15e9425d-6b94-4324-8278-89ee850f4d55 966 0 2025-09-13 00:53:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67c4bc6787 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 calico-apiserver-67c4bc6787-kfxxk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali019e7f326ba [] [] }} ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-kfxxk" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:32.908 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-kfxxk" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:32.984 [INFO][4765] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" HandleID="k8s-pod-network.bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:32.985 [INFO][4765] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" HandleID="k8s-pod-network.bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-1677b4f607", "pod":"calico-apiserver-67c4bc6787-kfxxk", "timestamp":"2025-09-13 00:54:32.984919337 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:32.985 [INFO][4765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:32.985 [INFO][4765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:32.985 [INFO][4765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.019 [INFO][4765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.033 [INFO][4765] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.042 [INFO][4765] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.044 [INFO][4765] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.046 [INFO][4765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.046 [INFO][4765] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.047 [INFO][4765] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38 Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.054 [INFO][4765] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.073 [INFO][4765] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.70/26] block=192.168.28.64/26 
handle="k8s-pod-network.bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.073 [INFO][4765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.70/26] handle="k8s-pod-network.bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.073 [INFO][4765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:33.103027 env[1564]: 2025-09-13 00:54:33.073 [INFO][4765] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.70/26] IPv6=[] ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" HandleID="k8s-pod-network.bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:33.103741 env[1564]: 2025-09-13 00:54:33.074 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-kfxxk" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0", GenerateName:"calico-apiserver-67c4bc6787-", Namespace:"calico-apiserver", SelfLink:"", UID:"15e9425d-6b94-4324-8278-89ee850f4d55", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4bc6787", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"calico-apiserver-67c4bc6787-kfxxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali019e7f326ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:33.103741 env[1564]: 2025-09-13 00:54:33.075 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.70/32] ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-kfxxk" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:33.103741 env[1564]: 2025-09-13 00:54:33.075 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali019e7f326ba ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-kfxxk" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:33.103741 env[1564]: 2025-09-13 00:54:33.088 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-kfxxk" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 
00:54:33.103741 env[1564]: 2025-09-13 00:54:33.088 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-kfxxk" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0", GenerateName:"calico-apiserver-67c4bc6787-", Namespace:"calico-apiserver", SelfLink:"", UID:"15e9425d-6b94-4324-8278-89ee850f4d55", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4bc6787", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38", Pod:"calico-apiserver-67c4bc6787-kfxxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali019e7f326ba", MAC:"0a:56:c0:68:48:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
Sep 13 00:54:33.103741 env[1564]: 2025-09-13 00:54:33.101 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-kfxxk" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:33.117000 audit[4779]: NETFILTER_CFG table=raw:116 family=2 entries=21 op=nft_register_chain pid=4779 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:33.117000 audit[4779]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe89afebc0 a2=0 a3=7ffe89afebac items=0 ppid=4349 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.117000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:33.120000 audit[4784]: NETFILTER_CFG table=filter:117 family=2 entries=222 op=nft_register_chain pid=4784 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:33.120000 audit[4784]: SYSCALL arch=c000003e syscall=46 success=yes exit=129820 a0=3 a1=7ffc84e25c90 a2=0 a3=7ffc84e25c7c items=0 ppid=4349 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.120000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:33.192000 audit[4805]: NETFILTER_CFG table=filter:118 family=2 entries=53 op=nft_register_chain pid=4805 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Sep 13 00:54:33.192000 audit[4805]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7ffd80c00600 a2=0 a3=7ffd80c005ec items=0 ppid=4349 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:33.192000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:33.217981 env[1564]: time="2025-09-13T00:54:33.217913749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:33.217981 env[1564]: time="2025-09-13T00:54:33.217953448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:33.218270 env[1564]: time="2025-09-13T00:54:33.217968748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:33.218542 env[1564]: time="2025-09-13T00:54:33.218499247Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38 pid=4813 runtime=io.containerd.runc.v2 Sep 13 00:54:33.290603 env[1564]: time="2025-09-13T00:54:33.290570603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4bc6787-kfxxk,Uid:15e9425d-6b94-4324-8278-89ee850f4d55,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38\"" Sep 13 00:54:34.015284 systemd-networkd[1746]: cali2015c03a261: Link UP Sep 13 00:54:34.025733 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:34.025820 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2015c03a261: link becomes ready Sep 13 00:54:34.026367 systemd-networkd[1746]: cali2015c03a261: Gained carrier Sep 13 00:54:34.039000 audit[4874]: NETFILTER_CFG table=filter:119 family=2 entries=14 op=nft_register_rule pid=4874 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:34.039000 audit[4874]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcfa39e560 a2=0 a3=7ffcfa39e54c items=0 ppid=2776 pid=4874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:34.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.949 [INFO][4851] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0 csi-node-driver- calico-system 
79f7874d-3642-4f78-9634-0fbf12fe0b02 978 0 2025-09-13 00:54:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 csi-node-driver-kg2m8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2015c03a261 [] [] }} ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Namespace="calico-system" Pod="csi-node-driver-kg2m8" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.950 [INFO][4851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Namespace="calico-system" Pod="csi-node-driver-kg2m8" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.977 [INFO][4864] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" HandleID="k8s-pod-network.28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.977 [INFO][4864] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" HandleID="k8s-pod-network.28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d52b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-1677b4f607", 
"pod":"csi-node-driver-kg2m8", "timestamp":"2025-09-13 00:54:33.977212081 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.977 [INFO][4864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.977 [INFO][4864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.977 [INFO][4864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.984 [INFO][4864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.989 [INFO][4864] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.992 [INFO][4864] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.993 [INFO][4864] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.995 [INFO][4864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:33.995 [INFO][4864] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 
00:54:33.997 [INFO][4864] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2 Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:34.001 [INFO][4864] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:34.008 [INFO][4864] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.71/26] block=192.168.28.64/26 handle="k8s-pod-network.28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:34.008 [INFO][4864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.71/26] handle="k8s-pod-network.28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:34.009 [INFO][4864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:34.049687 env[1564]: 2025-09-13 00:54:34.009 [INFO][4864] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.71/26] IPv6=[] ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" HandleID="k8s-pod-network.28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:34.050720 env[1564]: 2025-09-13 00:54:34.011 [INFO][4851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Namespace="calico-system" Pod="csi-node-driver-kg2m8" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79f7874d-3642-4f78-9634-0fbf12fe0b02", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"csi-node-driver-kg2m8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2015c03a261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:34.050720 env[1564]: 2025-09-13 00:54:34.011 [INFO][4851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.71/32] ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Namespace="calico-system" Pod="csi-node-driver-kg2m8" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:34.050720 env[1564]: 2025-09-13 00:54:34.011 [INFO][4851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2015c03a261 ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Namespace="calico-system" Pod="csi-node-driver-kg2m8" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:34.050720 env[1564]: 2025-09-13 00:54:34.030 [INFO][4851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Namespace="calico-system" Pod="csi-node-driver-kg2m8" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:34.050720 env[1564]: 2025-09-13 00:54:34.034 [INFO][4851] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Namespace="calico-system" Pod="csi-node-driver-kg2m8" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"79f7874d-3642-4f78-9634-0fbf12fe0b02", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2", Pod:"csi-node-driver-kg2m8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2015c03a261", MAC:"82:70:07:28:5c:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:34.050720 env[1564]: 2025-09-13 00:54:34.046 [INFO][4851] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2" Namespace="calico-system" Pod="csi-node-driver-kg2m8" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:34.063000 audit[4882]: NETFILTER_CFG table=filter:120 family=2 entries=56 op=nft_register_chain pid=4882 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:34.063000 audit[4882]: SYSCALL arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7fff42fa5640 a2=0 a3=7fff42fa562c items=0 ppid=4349 pid=4882 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:34.063000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:34.072000 audit[4874]: NETFILTER_CFG table=nat:121 family=2 entries=56 op=nft_register_chain pid=4874 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:34.072000 audit[4874]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcfa39e560 a2=0 a3=7ffcfa39e54c items=0 ppid=2776 pid=4874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:34.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:34.165834 env[1564]: time="2025-09-13T00:54:34.165760551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:34.165834 env[1564]: time="2025-09-13T00:54:34.165796951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:34.165834 env[1564]: time="2025-09-13T00:54:34.165812651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:34.223688 env[1564]: time="2025-09-13T00:54:34.166163850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2 pid=4891 runtime=io.containerd.runc.v2 Sep 13 00:54:34.256984 systemd[1]: run-containerd-runc-k8s.io-28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2-runc.xphaEG.mount: Deactivated successfully. Sep 13 00:54:34.284558 env[1564]: time="2025-09-13T00:54:34.284512356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kg2m8,Uid:79f7874d-3642-4f78-9634-0fbf12fe0b02,Namespace:calico-system,Attempt:1,} returns sandbox id \"28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2\"" Sep 13 00:54:34.500428 systemd-networkd[1746]: vxlan.calico: Gained IPv6LL Sep 13 00:54:34.692520 systemd-networkd[1746]: cali019e7f326ba: Gained IPv6LL Sep 13 00:54:35.844410 systemd-networkd[1746]: cali2015c03a261: Gained IPv6LL Sep 13 00:54:37.451524 env[1564]: time="2025-09-13T00:54:37.451481022Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:37.457609 env[1564]: time="2025-09-13T00:54:37.457569503Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:37.462891 env[1564]: time="2025-09-13T00:54:37.462858286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:37.469663 env[1564]: time="2025-09-13T00:54:37.469632764Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:37.470076 env[1564]: time="2025-09-13T00:54:37.470044363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:54:37.471856 env[1564]: time="2025-09-13T00:54:37.471342159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:54:37.472805 env[1564]: time="2025-09-13T00:54:37.472773954Z" level=info msg="CreateContainer within sandbox \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:54:37.505229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994091115.mount: Deactivated successfully. Sep 13 00:54:37.521651 env[1564]: time="2025-09-13T00:54:37.521604199Z" level=info msg="CreateContainer within sandbox \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\"" Sep 13 00:54:37.522178 env[1564]: time="2025-09-13T00:54:37.522141297Z" level=info msg="StartContainer for \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\"" Sep 13 00:54:37.605357 env[1564]: time="2025-09-13T00:54:37.605299532Z" level=info msg="StartContainer for \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\" returns successfully" Sep 13 00:54:38.006905 kubelet[2667]: I0913 00:54:38.006830 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-748d86bbf-sqqp6" podStartSLOduration=35.791515412 podStartE2EDuration="42.006809554s" podCreationTimestamp="2025-09-13 00:53:56 +0000 UTC" 
firstStartedPulling="2025-09-13 00:54:31.255867118 +0000 UTC m=+51.631920406" lastFinishedPulling="2025-09-13 00:54:37.47116116 +0000 UTC m=+57.847214548" observedRunningTime="2025-09-13 00:54:38.005671657 +0000 UTC m=+58.381724945" watchObservedRunningTime="2025-09-13 00:54:38.006809554 +0000 UTC m=+58.382862942" Sep 13 00:54:38.044497 kernel: kauditd_printk_skb: 589 callbacks suppressed Sep 13 00:54:38.044637 kernel: audit: type=1325 audit(1757724878.024:424): table=filter:122 family=2 entries=14 op=nft_register_rule pid=4967 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:38.024000 audit[4967]: NETFILTER_CFG table=filter:122 family=2 entries=14 op=nft_register_rule pid=4967 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:38.069286 kernel: audit: type=1300 audit(1757724878.024:424): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe35f89f60 a2=0 a3=7ffe35f89f4c items=0 ppid=2776 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.024000 audit[4967]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe35f89f60 a2=0 a3=7ffe35f89f4c items=0 ppid=2776 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.082824 kernel: audit: type=1327 audit(1757724878.024:424): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:38.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:38.071000 audit[4967]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=4967 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:38.121594 kernel: audit: type=1325 audit(1757724878.071:425): table=nat:123 family=2 entries=20 op=nft_register_rule pid=4967 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:38.121714 kernel: audit: type=1300 audit(1757724878.071:425): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe35f89f60 a2=0 a3=7ffe35f89f4c items=0 ppid=2776 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.071000 audit[4967]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe35f89f60 a2=0 a3=7ffe35f89f4c items=0 ppid=2776 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.134027 kernel: audit: type=1327 audit(1757724878.071:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:38.071000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:38.961138 env[1564]: time="2025-09-13T00:54:38.961087658Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:39.025082 env[1564]: time="2025-09-13T00:54:39.025033758Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:39.031000 audit[4973]: NETFILTER_CFG table=filter:124 family=2 entries=13 op=nft_register_rule pid=4973 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:39.044240 kernel: audit: type=1325 audit(1757724879.031:426): table=filter:124 family=2 entries=13 op=nft_register_rule pid=4973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:39.031000 audit[4973]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcb4268710 a2=0 a3=7ffcb42686fc items=0 ppid=2776 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:39.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:39.072526 env[1564]: time="2025-09-13T00:54:39.072490511Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:39.077565 kernel: audit: type=1300 audit(1757724879.031:426): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcb4268710 a2=0 a3=7ffcb42686fc items=0 ppid=2776 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:39.077671 kernel: audit: type=1327 audit(1757724879.031:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:39.048000 audit[4973]: NETFILTER_CFG table=nat:125 family=2 entries=27 op=nft_register_chain pid=4973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:39.089581 kernel: audit: type=1325 audit(1757724879.048:427): table=nat:125 family=2 entries=27 op=nft_register_chain pid=4973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:39.048000 
audit[4973]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffcb4268710 a2=0 a3=7ffcb42686fc items=0 ppid=2776 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:39.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:39.119644 env[1564]: time="2025-09-13T00:54:39.119595865Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:39.120479 env[1564]: time="2025-09-13T00:54:39.120447762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:54:39.121815 env[1564]: time="2025-09-13T00:54:39.121775158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:54:39.123131 env[1564]: time="2025-09-13T00:54:39.123099654Z" level=info msg="CreateContainer within sandbox \"29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:54:39.383123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160569056.mount: Deactivated successfully. 
Sep 13 00:54:39.620569 env[1564]: time="2025-09-13T00:54:39.620523214Z" level=info msg="CreateContainer within sandbox \"29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4db21bbe50f719c2235bde2624623e57c94bdb67f2f9d65b8467e4a6f4063204\"" Sep 13 00:54:39.621277 env[1564]: time="2025-09-13T00:54:39.621245412Z" level=info msg="StartContainer for \"4db21bbe50f719c2235bde2624623e57c94bdb67f2f9d65b8467e4a6f4063204\"" Sep 13 00:54:39.653792 systemd[1]: run-containerd-runc-k8s.io-4db21bbe50f719c2235bde2624623e57c94bdb67f2f9d65b8467e4a6f4063204-runc.IZkSU6.mount: Deactivated successfully. Sep 13 00:54:39.708704 env[1564]: time="2025-09-13T00:54:39.705740251Z" level=info msg="StartContainer for \"4db21bbe50f719c2235bde2624623e57c94bdb67f2f9d65b8467e4a6f4063204\" returns successfully" Sep 13 00:54:40.229661 env[1564]: time="2025-09-13T00:54:40.229622239Z" level=info msg="StopPodSandbox for \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\"" Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.266 [WARNING][5025] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.266 [INFO][5025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.266 [INFO][5025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" iface="eth0" netns="" Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.266 [INFO][5025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.266 [INFO][5025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.289 [INFO][5032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" HandleID="k8s-pod-network.96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.289 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.289 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.295 [WARNING][5032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" HandleID="k8s-pod-network.96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.295 [INFO][5032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" HandleID="k8s-pod-network.96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.297 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:40.299151 env[1564]: 2025-09-13 00:54:40.298 [INFO][5025] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:40.299655 env[1564]: time="2025-09-13T00:54:40.299172326Z" level=info msg="TearDown network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\" successfully" Sep 13 00:54:40.299655 env[1564]: time="2025-09-13T00:54:40.299221226Z" level=info msg="StopPodSandbox for \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\" returns successfully" Sep 13 00:54:40.299921 env[1564]: time="2025-09-13T00:54:40.299892524Z" level=info msg="RemovePodSandbox for \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\"" Sep 13 00:54:40.299995 env[1564]: time="2025-09-13T00:54:40.299926824Z" level=info msg="Forcibly stopping sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\"" Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.329 [WARNING][5047] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" 
WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.330 [INFO][5047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.330 [INFO][5047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" iface="eth0" netns="" Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.330 [INFO][5047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.330 [INFO][5047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.350 [INFO][5054] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" HandleID="k8s-pod-network.96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.351 [INFO][5054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.351 [INFO][5054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.356 [WARNING][5054] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" HandleID="k8s-pod-network.96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.356 [INFO][5054] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" HandleID="k8s-pod-network.96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Workload="ci--3510.3.8--n--1677b4f607-k8s-whisker--76f544d5bb--l8svr-eth0" Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.357 [INFO][5054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:40.359715 env[1564]: 2025-09-13 00:54:40.358 [INFO][5047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14" Sep 13 00:54:40.360310 env[1564]: time="2025-09-13T00:54:40.359739041Z" level=info msg="TearDown network for sandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\" successfully" Sep 13 00:54:40.376334 env[1564]: time="2025-09-13T00:54:40.376292391Z" level=info msg="RemovePodSandbox \"96383fcc49bd3493a452006179d1f95ed82895e7061f627cd089826f5652cd14\" returns successfully" Sep 13 00:54:40.376915 env[1564]: time="2025-09-13T00:54:40.376886489Z" level=info msg="StopPodSandbox for \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\"" Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.429 [WARNING][5068] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1a2d03cb-de38-46e3-bef7-5c63c9032e67", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3", Pod:"coredns-7c65d6cfc9-skrlz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif85ea771757", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:40.457965 env[1564]: 2025-09-13 
00:54:40.429 [INFO][5068] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.429 [INFO][5068] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" iface="eth0" netns="" Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.429 [INFO][5068] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.429 [INFO][5068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.449 [INFO][5075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" HandleID="k8s-pod-network.ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.449 [INFO][5075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.449 [INFO][5075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.454 [WARNING][5075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" HandleID="k8s-pod-network.ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.454 [INFO][5075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" HandleID="k8s-pod-network.ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.455 [INFO][5075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:40.457965 env[1564]: 2025-09-13 00:54:40.456 [INFO][5068] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:40.458490 env[1564]: time="2025-09-13T00:54:40.458446640Z" level=info msg="TearDown network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\" successfully" Sep 13 00:54:40.458545 env[1564]: time="2025-09-13T00:54:40.458480540Z" level=info msg="StopPodSandbox for \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\" returns successfully" Sep 13 00:54:40.459075 env[1564]: time="2025-09-13T00:54:40.459043038Z" level=info msg="RemovePodSandbox for \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\"" Sep 13 00:54:40.459161 env[1564]: time="2025-09-13T00:54:40.459074938Z" level=info msg="Forcibly stopping sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\"" Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.490 [WARNING][5090] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1a2d03cb-de38-46e3-bef7-5c63c9032e67", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"4371831a757ce3f4cdbf183bcb6b35e2d39a16d604f22eaa6b4eb1881dd6fff3", Pod:"coredns-7c65d6cfc9-skrlz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif85ea771757", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:40.518946 env[1564]: 2025-09-13 
00:54:40.490 [INFO][5090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.491 [INFO][5090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" iface="eth0" netns="" Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.491 [INFO][5090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.491 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.509 [INFO][5097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" HandleID="k8s-pod-network.ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.509 [INFO][5097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.509 [INFO][5097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.515 [WARNING][5097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" HandleID="k8s-pod-network.ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.515 [INFO][5097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" HandleID="k8s-pod-network.ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--skrlz-eth0" Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.516 [INFO][5097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:40.518946 env[1564]: 2025-09-13 00:54:40.517 [INFO][5090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9" Sep 13 00:54:40.520239 env[1564]: time="2025-09-13T00:54:40.518908455Z" level=info msg="TearDown network for sandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\" successfully" Sep 13 00:54:40.721082 env[1564]: time="2025-09-13T00:54:40.721015738Z" level=info msg="RemovePodSandbox \"ae99855805348fdac6a1068cecbf95088d65d0d0dbded03f5a594643f781e3c9\" returns successfully" Sep 13 00:54:40.721730 env[1564]: time="2025-09-13T00:54:40.721695736Z" level=info msg="StopPodSandbox for \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\"" Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.753 [WARNING][5111] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0", GenerateName:"calico-apiserver-748d86bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"befb9c20-74f9-48dc-9181-e5e1cb0477a7", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748d86bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c", Pod:"calico-apiserver-748d86bbf-sqqp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali441ae136b3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.755 [INFO][5111] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.755 [INFO][5111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" iface="eth0" netns="" Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.755 [INFO][5111] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.755 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.776 [INFO][5118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" HandleID="k8s-pod-network.8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.776 [INFO][5118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.776 [INFO][5118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.781 [WARNING][5118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" HandleID="k8s-pod-network.8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.781 [INFO][5118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" HandleID="k8s-pod-network.8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.782 [INFO][5118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:40.785206 env[1564]: 2025-09-13 00:54:40.783 [INFO][5111] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:40.785206 env[1564]: time="2025-09-13T00:54:40.784988843Z" level=info msg="TearDown network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\" successfully" Sep 13 00:54:40.785206 env[1564]: time="2025-09-13T00:54:40.785030043Z" level=info msg="StopPodSandbox for \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\" returns successfully" Sep 13 00:54:40.786178 env[1564]: time="2025-09-13T00:54:40.786145639Z" level=info msg="RemovePodSandbox for \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\"" Sep 13 00:54:40.786283 env[1564]: time="2025-09-13T00:54:40.786182739Z" level=info msg="Forcibly stopping sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\"" Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.820 [WARNING][5132] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0", GenerateName:"calico-apiserver-748d86bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"befb9c20-74f9-48dc-9181-e5e1cb0477a7", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748d86bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c", Pod:"calico-apiserver-748d86bbf-sqqp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali441ae136b3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.820 [INFO][5132] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.820 [INFO][5132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" iface="eth0" netns="" Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.820 [INFO][5132] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.820 [INFO][5132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.841 [INFO][5139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" HandleID="k8s-pod-network.8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.842 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.842 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.847 [WARNING][5139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" HandleID="k8s-pod-network.8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.847 [INFO][5139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" HandleID="k8s-pod-network.8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.848 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:40.850620 env[1564]: 2025-09-13 00:54:40.849 [INFO][5132] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db" Sep 13 00:54:40.851225 env[1564]: time="2025-09-13T00:54:40.850658042Z" level=info msg="TearDown network for sandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\" successfully" Sep 13 00:54:41.019497 env[1564]: time="2025-09-13T00:54:41.019456428Z" level=info msg="RemovePodSandbox \"8d9847963ba951afc87e2f8412ec0f992c06f9c68d30d9ed47c023f18d1270db\" returns successfully" Sep 13 00:54:41.020148 env[1564]: time="2025-09-13T00:54:41.020117826Z" level=info msg="StopPodSandbox for \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\"" Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.084 [WARNING][5156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79f7874d-3642-4f78-9634-0fbf12fe0b02", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2", Pod:"csi-node-driver-kg2m8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2015c03a261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.084 [INFO][5156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.084 [INFO][5156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" iface="eth0" netns="" Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.085 [INFO][5156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.085 [INFO][5156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.120 [INFO][5163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" HandleID="k8s-pod-network.f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.120 [INFO][5163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.120 [INFO][5163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.127 [WARNING][5163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" HandleID="k8s-pod-network.f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.127 [INFO][5163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" HandleID="k8s-pod-network.f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.128 [INFO][5163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:41.132652 env[1564]: 2025-09-13 00:54:41.130 [INFO][5156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:41.133227 env[1564]: time="2025-09-13T00:54:41.132691087Z" level=info msg="TearDown network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\" successfully" Sep 13 00:54:41.133227 env[1564]: time="2025-09-13T00:54:41.132727187Z" level=info msg="StopPodSandbox for \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\" returns successfully" Sep 13 00:54:41.133345 env[1564]: time="2025-09-13T00:54:41.133317585Z" level=info msg="RemovePodSandbox for \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\"" Sep 13 00:54:41.133396 env[1564]: time="2025-09-13T00:54:41.133359785Z" level=info msg="Forcibly stopping sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\"" Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.204 [WARNING][5179] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79f7874d-3642-4f78-9634-0fbf12fe0b02", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2", Pod:"csi-node-driver-kg2m8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2015c03a261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.204 [INFO][5179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.204 [INFO][5179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" iface="eth0" netns="" Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.204 [INFO][5179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.204 [INFO][5179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.247 [INFO][5186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" HandleID="k8s-pod-network.f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.247 [INFO][5186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.247 [INFO][5186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.254 [WARNING][5186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" HandleID="k8s-pod-network.f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.254 [INFO][5186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" HandleID="k8s-pod-network.f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Workload="ci--3510.3.8--n--1677b4f607-k8s-csi--node--driver--kg2m8-eth0" Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.256 [INFO][5186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:41.259620 env[1564]: 2025-09-13 00:54:41.257 [INFO][5179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11" Sep 13 00:54:41.260706 env[1564]: time="2025-09-13T00:54:41.260657801Z" level=info msg="TearDown network for sandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\" successfully" Sep 13 00:54:41.269012 env[1564]: time="2025-09-13T00:54:41.268962176Z" level=info msg="RemovePodSandbox \"f458b098f2974f9e975997ede519674fa64fa916b74df0c4d409651d17d60a11\" returns successfully" Sep 13 00:54:41.269732 env[1564]: time="2025-09-13T00:54:41.269703474Z" level=info msg="StopPodSandbox for \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\"" Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.344 [WARNING][5202] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0603fd61-ad5b-4bb1-81d5-450dc870214c", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f", Pod:"coredns-7c65d6cfc9-794cx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a39f1fe629", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:41.390968 env[1564]: 2025-09-13 
00:54:41.344 [INFO][5202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.344 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" iface="eth0" netns="" Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.344 [INFO][5202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.344 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.379 [INFO][5209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" HandleID="k8s-pod-network.53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.379 [INFO][5209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.379 [INFO][5209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.384 [WARNING][5209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" HandleID="k8s-pod-network.53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.384 [INFO][5209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" HandleID="k8s-pod-network.53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.386 [INFO][5209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:41.390968 env[1564]: 2025-09-13 00:54:41.388 [INFO][5202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:41.390968 env[1564]: time="2025-09-13T00:54:41.389789113Z" level=info msg="TearDown network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\" successfully" Sep 13 00:54:41.390968 env[1564]: time="2025-09-13T00:54:41.389828112Z" level=info msg="StopPodSandbox for \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\" returns successfully" Sep 13 00:54:41.390968 env[1564]: time="2025-09-13T00:54:41.390355311Z" level=info msg="RemovePodSandbox for \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\"" Sep 13 00:54:41.390968 env[1564]: time="2025-09-13T00:54:41.390442511Z" level=info msg="Forcibly stopping sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\"" Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.423 [WARNING][5223] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0603fd61-ad5b-4bb1-81d5-450dc870214c", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"39cd33fdcd39c166853e7ed70ac5e6fdebfcc0a3ac0407c85d276f265e89c62f", Pod:"coredns-7c65d6cfc9-794cx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a39f1fe629", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:41.452686 env[1564]: 2025-09-13 
00:54:41.423 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.423 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" iface="eth0" netns="" Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.423 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.423 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.443 [INFO][5230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" HandleID="k8s-pod-network.53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.443 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.443 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.449 [WARNING][5230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" HandleID="k8s-pod-network.53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.449 [INFO][5230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" HandleID="k8s-pod-network.53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Workload="ci--3510.3.8--n--1677b4f607-k8s-coredns--7c65d6cfc9--794cx-eth0" Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.450 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:41.452686 env[1564]: 2025-09-13 00:54:41.451 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010" Sep 13 00:54:41.453299 env[1564]: time="2025-09-13T00:54:41.452724323Z" level=info msg="TearDown network for sandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\" successfully" Sep 13 00:54:43.024135 env[1564]: time="2025-09-13T00:54:43.024056832Z" level=info msg="RemovePodSandbox \"53217cf6fa66008f97eb42c6dc8fe913ce3ef18fe24c4a389aa6c890a8343010\" returns successfully" Sep 13 00:54:43.024580 env[1564]: time="2025-09-13T00:54:43.024540731Z" level=info msg="StopPodSandbox for \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\"" Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.084 [WARNING][5247] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0", GenerateName:"calico-kube-controllers-77c444948d-", Namespace:"calico-system", SelfLink:"", UID:"e0838642-fa34-4ba2-b6e9-33b770e8c2d4", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c444948d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e", Pod:"calico-kube-controllers-77c444948d-hspwf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c7ac925471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.084 [INFO][5247] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.084 [INFO][5247] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" iface="eth0" netns="" Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.084 [INFO][5247] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.084 [INFO][5247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.127 [INFO][5256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" HandleID="k8s-pod-network.6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.127 [INFO][5256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.127 [INFO][5256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.134 [WARNING][5256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" HandleID="k8s-pod-network.6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.134 [INFO][5256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" HandleID="k8s-pod-network.6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.136 [INFO][5256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:43.139990 env[1564]: 2025-09-13 00:54:43.138 [INFO][5247] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:43.140598 env[1564]: time="2025-09-13T00:54:43.140024492Z" level=info msg="TearDown network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\" successfully" Sep 13 00:54:43.140598 env[1564]: time="2025-09-13T00:54:43.140060992Z" level=info msg="StopPodSandbox for \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\" returns successfully" Sep 13 00:54:43.140598 env[1564]: time="2025-09-13T00:54:43.140540691Z" level=info msg="RemovePodSandbox for \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\"" Sep 13 00:54:43.140702 env[1564]: time="2025-09-13T00:54:43.140576791Z" level=info msg="Forcibly stopping sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\"" Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.220 [WARNING][5273] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0", GenerateName:"calico-kube-controllers-77c444948d-", Namespace:"calico-system", SelfLink:"", UID:"e0838642-fa34-4ba2-b6e9-33b770e8c2d4", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c444948d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e", Pod:"calico-kube-controllers-77c444948d-hspwf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c7ac925471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.221 [INFO][5273] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.221 [INFO][5273] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" iface="eth0" netns="" Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.221 [INFO][5273] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.221 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.262 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" HandleID="k8s-pod-network.6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.262 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.262 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.268 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" HandleID="k8s-pod-network.6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.268 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" HandleID="k8s-pod-network.6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--kube--controllers--77c444948d--hspwf-eth0" Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.286 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:43.301787 env[1564]: 2025-09-13 00:54:43.294 [INFO][5273] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539" Sep 13 00:54:43.302486 env[1564]: time="2025-09-13T00:54:43.302429716Z" level=info msg="TearDown network for sandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\" successfully" Sep 13 00:54:43.310509 env[1564]: time="2025-09-13T00:54:43.310468592Z" level=info msg="RemovePodSandbox \"6df2861236377f732c5c6509fb261f86df8f14563c1b999d3dd212f7dd115539\" returns successfully" Sep 13 00:54:43.311146 env[1564]: time="2025-09-13T00:54:43.311121390Z" level=info msg="StopPodSandbox for \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\"" Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.410 [WARNING][5295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0", GenerateName:"calico-apiserver-67c4bc6787-", Namespace:"calico-apiserver", SelfLink:"", UID:"15e9425d-6b94-4324-8278-89ee850f4d55", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4bc6787", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38", Pod:"calico-apiserver-67c4bc6787-kfxxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali019e7f326ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.411 [INFO][5295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.411 [INFO][5295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" iface="eth0" netns="" Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.411 [INFO][5295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.411 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.518 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" HandleID="k8s-pod-network.c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.518 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.521 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.530 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" HandleID="k8s-pod-network.c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.530 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" HandleID="k8s-pod-network.c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.535 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:43.540443 env[1564]: 2025-09-13 00:54:43.538 [INFO][5295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:43.541299 env[1564]: time="2025-09-13T00:54:43.541262215Z" level=info msg="TearDown network for sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\" successfully" Sep 13 00:54:43.541383 env[1564]: time="2025-09-13T00:54:43.541367015Z" level=info msg="StopPodSandbox for \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\" returns successfully" Sep 13 00:54:43.541904 env[1564]: time="2025-09-13T00:54:43.541875314Z" level=info msg="RemovePodSandbox for \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\"" Sep 13 00:54:43.541983 env[1564]: time="2025-09-13T00:54:43.541908513Z" level=info msg="Forcibly stopping sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\"" Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.644 [WARNING][5318] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0", GenerateName:"calico-apiserver-67c4bc6787-", Namespace:"calico-apiserver", SelfLink:"", UID:"15e9425d-6b94-4324-8278-89ee850f4d55", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4bc6787", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38", Pod:"calico-apiserver-67c4bc6787-kfxxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali019e7f326ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.645 [INFO][5318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.645 [INFO][5318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" iface="eth0" netns="" Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.645 [INFO][5318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.645 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.677 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" HandleID="k8s-pod-network.c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.678 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.678 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.693 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" HandleID="k8s-pod-network.c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.693 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" HandleID="k8s-pod-network.c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--kfxxk-eth0" Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.694 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:43.699587 env[1564]: 2025-09-13 00:54:43.697 [INFO][5318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7" Sep 13 00:54:43.701453 env[1564]: time="2025-09-13T00:54:43.699562751Z" level=info msg="TearDown network for sandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\" successfully" Sep 13 00:54:43.716741 env[1564]: time="2025-09-13T00:54:43.716692001Z" level=info msg="RemovePodSandbox \"c29e8a496d9bc8312b9d113d42d479e2156fa222a808e902447d5c6ecb67dcb7\" returns successfully" Sep 13 00:54:43.783361 env[1564]: time="2025-09-13T00:54:43.783305105Z" level=info msg="StopPodSandbox for \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\"" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.860 [INFO][5342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.860 [INFO][5342] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" iface="eth0" netns="/var/run/netns/cni-223ea8b8-cb73-9cee-1038-992b1082079f" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.860 [INFO][5342] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" iface="eth0" netns="/var/run/netns/cni-223ea8b8-cb73-9cee-1038-992b1082079f" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.860 [INFO][5342] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" iface="eth0" netns="/var/run/netns/cni-223ea8b8-cb73-9cee-1038-992b1082079f" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.860 [INFO][5342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.860 [INFO][5342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.895 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" HandleID="k8s-pod-network.cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.895 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.895 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.902 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" HandleID="k8s-pod-network.cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.903 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" HandleID="k8s-pod-network.cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.904 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:43.908000 env[1564]: 2025-09-13 00:54:43.906 [INFO][5342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:54:43.916639 systemd[1]: run-netns-cni\x2d223ea8b8\x2dcb73\x2d9cee\x2d1038\x2d992b1082079f.mount: Deactivated successfully. 
Sep 13 00:54:43.918432 env[1564]: time="2025-09-13T00:54:43.918371909Z" level=info msg="TearDown network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\" successfully" Sep 13 00:54:43.918592 env[1564]: time="2025-09-13T00:54:43.918571909Z" level=info msg="StopPodSandbox for \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\" returns successfully" Sep 13 00:54:43.919423 env[1564]: time="2025-09-13T00:54:43.919395206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rtn9j,Uid:ce3dd00f-1685-4fb8-a21f-eacbff2544a7,Namespace:calico-system,Attempt:1,}" Sep 13 00:54:44.165816 systemd-networkd[1746]: calic11030c6e35: Link UP Sep 13 00:54:44.189819 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:44.189912 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic11030c6e35: link becomes ready Sep 13 00:54:44.192865 systemd-networkd[1746]: calic11030c6e35: Gained carrier Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.019 [INFO][5356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0 goldmane-7988f88666- calico-system ce3dd00f-1685-4fb8-a21f-eacbff2544a7 1038 0 2025-09-13 00:54:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 goldmane-7988f88666-rtn9j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic11030c6e35 [] [] }} ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Namespace="calico-system" Pod="goldmane-7988f88666-rtn9j" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.019 [INFO][5356] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Namespace="calico-system" Pod="goldmane-7988f88666-rtn9j" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.065 [INFO][5369] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" HandleID="k8s-pod-network.9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.066 [INFO][5369] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" HandleID="k8s-pod-network.9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b74a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-1677b4f607", "pod":"goldmane-7988f88666-rtn9j", "timestamp":"2025-09-13 00:54:44.065779179 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.066 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.066 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.066 [INFO][5369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.080 [INFO][5369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.088 [INFO][5369] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.096 [INFO][5369] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.100 [INFO][5369] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.103 [INFO][5369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.103 [INFO][5369] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.104 [INFO][5369] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54 Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.128 [INFO][5369] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.138 [INFO][5369] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.72/26] block=192.168.28.64/26 
handle="k8s-pod-network.9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.138 [INFO][5369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.72/26] handle="k8s-pod-network.9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.138 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:44.223800 env[1564]: 2025-09-13 00:54:44.138 [INFO][5369] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.72/26] IPv6=[] ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" HandleID="k8s-pod-network.9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:44.225015 env[1564]: 2025-09-13 00:54:44.140 [INFO][5356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Namespace="calico-system" Pod="goldmane-7988f88666-rtn9j" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"ce3dd00f-1685-4fb8-a21f-eacbff2544a7", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"goldmane-7988f88666-rtn9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic11030c6e35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:44.225015 env[1564]: 2025-09-13 00:54:44.140 [INFO][5356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.72/32] ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Namespace="calico-system" Pod="goldmane-7988f88666-rtn9j" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:44.225015 env[1564]: 2025-09-13 00:54:44.140 [INFO][5356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic11030c6e35 ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Namespace="calico-system" Pod="goldmane-7988f88666-rtn9j" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:44.225015 env[1564]: 2025-09-13 00:54:44.195 [INFO][5356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Namespace="calico-system" Pod="goldmane-7988f88666-rtn9j" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:44.225015 env[1564]: 2025-09-13 00:54:44.196 [INFO][5356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Namespace="calico-system" Pod="goldmane-7988f88666-rtn9j" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"ce3dd00f-1685-4fb8-a21f-eacbff2544a7", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54", Pod:"goldmane-7988f88666-rtn9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic11030c6e35", MAC:"ee:61:4d:ac:08:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:44.225015 env[1564]: 2025-09-13 00:54:44.210 [INFO][5356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54" Namespace="calico-system" Pod="goldmane-7988f88666-rtn9j" 
WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:54:44.253837 env[1564]: time="2025-09-13T00:54:44.253769435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:44.254031 env[1564]: time="2025-09-13T00:54:44.254007634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:44.254141 env[1564]: time="2025-09-13T00:54:44.254121934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:44.255453 env[1564]: time="2025-09-13T00:54:44.255413730Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54 pid=5395 runtime=io.containerd.runc.v2 Sep 13 00:54:44.253000 audit[5387]: NETFILTER_CFG table=filter:126 family=2 entries=68 op=nft_register_chain pid=5387 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:44.282032 kernel: kauditd_printk_skb: 2 callbacks suppressed Sep 13 00:54:44.282282 kernel: audit: type=1325 audit(1757724884.253:428): table=filter:126 family=2 entries=68 op=nft_register_chain pid=5387 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:44.253000 audit[5387]: SYSCALL arch=c000003e syscall=46 success=yes exit=32308 a0=3 a1=7fffa9ad9d40 a2=0 a3=7fffa9ad9d2c items=0 ppid=4349 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.321768 kernel: audit: type=1300 audit(1757724884.253:428): arch=c000003e syscall=46 success=yes exit=32308 a0=3 a1=7fffa9ad9d40 a2=0 a3=7fffa9ad9d2c items=0 ppid=4349 pid=5387 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.253000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:44.336281 kernel: audit: type=1327 audit(1757724884.253:428): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:44.412751 env[1564]: time="2025-09-13T00:54:44.412716475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rtn9j,Uid:ce3dd00f-1685-4fb8-a21f-eacbff2544a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54\"" Sep 13 00:54:44.779757 env[1564]: time="2025-09-13T00:54:44.779719212Z" level=info msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\"" Sep 13 00:54:44.915102 systemd[1]: run-containerd-runc-k8s.io-9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54-runc.CaFIa0.mount: Deactivated successfully. Sep 13 00:54:44.998157 systemd[1]: run-netns-cni\x2de6483474\x2da4d4\x2da79b\x2dbf58\x2da2cca4d27bef.mount: Deactivated successfully. Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.918 [INFO][5441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.918 [INFO][5441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" iface="eth0" netns="/var/run/netns/cni-e6483474-a4d4-a79b-bf58-a2cca4d27bef" Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.919 [INFO][5441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" iface="eth0" netns="/var/run/netns/cni-e6483474-a4d4-a79b-bf58-a2cca4d27bef" Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.921 [INFO][5441] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" iface="eth0" netns="/var/run/netns/cni-e6483474-a4d4-a79b-bf58-a2cca4d27bef" Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.921 [INFO][5441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.921 [INFO][5441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.982 [INFO][5448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.982 [INFO][5448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.982 [INFO][5448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.989 [WARNING][5448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.989 [INFO][5448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.990 [INFO][5448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:45.019508 env[1564]: 2025-09-13 00:54:44.992 [INFO][5441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:54:45.019508 env[1564]: time="2025-09-13T00:54:44.994628390Z" level=info msg="TearDown network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" successfully" Sep 13 00:54:45.019508 env[1564]: time="2025-09-13T00:54:44.994668190Z" level=info msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" returns successfully" Sep 13 00:54:45.019508 env[1564]: time="2025-09-13T00:54:44.995417088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748d86bbf-zd92h,Uid:64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:54:45.252422 systemd-networkd[1746]: calic11030c6e35: Gained IPv6LL Sep 13 00:54:46.966932 env[1564]: time="2025-09-13T00:54:46.966889585Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.125136 env[1564]: 
time="2025-09-13T00:54:47.125095042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.219876 env[1564]: time="2025-09-13T00:54:47.219779078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.363194 env[1564]: time="2025-09-13T00:54:47.363147178Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.363513 env[1564]: time="2025-09-13T00:54:47.363483077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:54:47.376422 env[1564]: time="2025-09-13T00:54:47.368825163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:54:47.387245 env[1564]: time="2025-09-13T00:54:47.387101412Z" level=info msg="CreateContainer within sandbox \"84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:54:47.440138 systemd-networkd[1746]: cali2da3051687e: Link UP Sep 13 00:54:47.453215 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:47.453375 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2da3051687e: link becomes ready Sep 13 00:54:47.454112 systemd-networkd[1746]: cali2da3051687e: Gained carrier Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.345 [INFO][5456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0 calico-apiserver-748d86bbf- calico-apiserver 64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf 1047 0 2025-09-13 00:53:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748d86bbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 calico-apiserver-748d86bbf-zd92h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2da3051687e [] [] }} ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-zd92h" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.345 [INFO][5456] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-zd92h" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.396 [INFO][5469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.396 [INFO][5469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000250ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-1677b4f607", "pod":"calico-apiserver-748d86bbf-zd92h", "timestamp":"2025-09-13 00:54:47.396407686 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.396 [INFO][5469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.396 [INFO][5469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.396 [INFO][5469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.402 [INFO][5469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.407 [INFO][5469] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.412 [INFO][5469] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.413 [INFO][5469] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.415 [INFO][5469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.415 [INFO][5469] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.28.64/26 handle="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.416 [INFO][5469] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82 Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.422 [INFO][5469] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.432 [INFO][5469] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.73/26] block=192.168.28.64/26 handle="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.432 [INFO][5469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.73/26] handle="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.432 [INFO][5469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:47.470652 env[1564]: 2025-09-13 00:54:47.432 [INFO][5469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.73/26] IPv6=[] ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:47.471573 env[1564]: 2025-09-13 00:54:47.433 [INFO][5456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-zd92h" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0", GenerateName:"calico-apiserver-748d86bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748d86bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"calico-apiserver-748d86bbf-zd92h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.73/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2da3051687e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:47.471573 env[1564]: 2025-09-13 00:54:47.433 [INFO][5456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.73/32] ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-zd92h" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:47.471573 env[1564]: 2025-09-13 00:54:47.434 [INFO][5456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2da3051687e ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-zd92h" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:47.471573 env[1564]: 2025-09-13 00:54:47.455 [INFO][5456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-zd92h" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:47.471573 env[1564]: 2025-09-13 00:54:47.455 [INFO][5456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-zd92h" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0", GenerateName:"calico-apiserver-748d86bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748d86bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82", Pod:"calico-apiserver-748d86bbf-zd92h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2da3051687e", MAC:"fa:91:04:00:98:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:47.471573 env[1564]: 2025-09-13 00:54:47.467 [INFO][5456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Namespace="calico-apiserver" Pod="calico-apiserver-748d86bbf-zd92h" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:54:47.483000 audit[5487]: NETFILTER_CFG table=filter:127 family=2 entries=71 op=nft_register_chain pid=5487 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:47.498209 kernel: audit: type=1325 audit(1757724887.483:429): table=filter:127 family=2 entries=71 op=nft_register_chain pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:47.498322 kernel: audit: type=1300 audit(1757724887.483:429): arch=c000003e syscall=46 success=yes exit=33056 a0=3 a1=7ffc795eb770 a2=0 a3=7ffc795eb75c items=0 ppid=4349 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:47.483000 audit[5487]: SYSCALL arch=c000003e syscall=46 success=yes exit=33056 a0=3 a1=7ffc795eb770 a2=0 a3=7ffc795eb75c items=0 ppid=4349 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:47.520720 kernel: audit: type=1327 audit(1757724887.483:429): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:47.483000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:47.621491 env[1564]: time="2025-09-13T00:54:47.621409358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:47.621689 env[1564]: time="2025-09-13T00:54:47.621458958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:47.621689 env[1564]: time="2025-09-13T00:54:47.621475358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:47.621689 env[1564]: time="2025-09-13T00:54:47.621607557Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82 pid=5496 runtime=io.containerd.runc.v2 Sep 13 00:54:47.680070 env[1564]: time="2025-09-13T00:54:47.680022194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748d86bbf-zd92h,Uid:64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\"" Sep 13 00:54:47.683434 env[1564]: time="2025-09-13T00:54:47.683399885Z" level=info msg="CreateContainer within sandbox \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:54:48.060362 env[1564]: time="2025-09-13T00:54:48.060318735Z" level=info msg="CreateContainer within sandbox \"84cb21780d83253cb49c214393e925b981f3206f61274b088ee680f519be543e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5ae9b005ed0a0c331e39266c551ee52c53930b99d68d13b128d8850e9f501108\"" Sep 13 00:54:48.061741 env[1564]: time="2025-09-13T00:54:48.061082033Z" level=info msg="StartContainer for \"5ae9b005ed0a0c331e39266c551ee52c53930b99d68d13b128d8850e9f501108\"" Sep 13 00:54:48.265740 env[1564]: time="2025-09-13T00:54:48.265672669Z" level=info msg="StartContainer for \"5ae9b005ed0a0c331e39266c551ee52c53930b99d68d13b128d8850e9f501108\" returns successfully" Sep 13 00:54:48.773973 env[1564]: time="2025-09-13T00:54:48.773931168Z" level=info msg="CreateContainer within sandbox \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943\"" Sep 13 
00:54:48.774524 env[1564]: time="2025-09-13T00:54:48.774485567Z" level=info msg="StartContainer for \"c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943\"" Sep 13 00:54:49.092310 systemd-networkd[1746]: cali2da3051687e: Gained IPv6LL Sep 13 00:54:49.123386 env[1564]: time="2025-09-13T00:54:49.123342609Z" level=info msg="StartContainer for \"c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943\" returns successfully" Sep 13 00:54:49.124691 env[1564]: time="2025-09-13T00:54:49.123873808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.130592 env[1564]: time="2025-09-13T00:54:49.130563989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.144115 env[1564]: time="2025-09-13T00:54:49.144074653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.158228 env[1564]: time="2025-09-13T00:54:49.154929823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.158228 env[1564]: time="2025-09-13T00:54:49.157006217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:54:49.165911 env[1564]: time="2025-09-13T00:54:49.159814310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:54:49.165911 env[1564]: time="2025-09-13T00:54:49.161023706Z" level=info 
msg="CreateContainer within sandbox \"bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:54:49.168831 kubelet[2667]: I0913 00:54:49.167829 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77c444948d-hspwf" podStartSLOduration=33.703266883 podStartE2EDuration="49.167798388s" podCreationTimestamp="2025-09-13 00:54:00 +0000 UTC" firstStartedPulling="2025-09-13 00:54:31.900237169 +0000 UTC m=+52.276290557" lastFinishedPulling="2025-09-13 00:54:47.364768774 +0000 UTC m=+67.740822062" observedRunningTime="2025-09-13 00:54:49.14873304 +0000 UTC m=+69.524786328" watchObservedRunningTime="2025-09-13 00:54:49.167798388 +0000 UTC m=+69.543851676" Sep 13 00:54:49.168831 kubelet[2667]: I0913 00:54:49.167955 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-748d86bbf-zd92h" podStartSLOduration=52.167949487 podStartE2EDuration="52.167949487s" podCreationTimestamp="2025-09-13 00:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:49.167317689 +0000 UTC m=+69.543370977" watchObservedRunningTime="2025-09-13 00:54:49.167949487 +0000 UTC m=+69.544002775" Sep 13 00:54:49.179606 systemd[1]: run-containerd-runc-k8s.io-c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943-runc.Db8Wuq.mount: Deactivated successfully. 
Sep 13 00:54:49.212000 audit[5653]: NETFILTER_CFG table=filter:128 family=2 entries=12 op=nft_register_rule pid=5653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:49.230352 kernel: audit: type=1325 audit(1757724889.212:430): table=filter:128 family=2 entries=12 op=nft_register_rule pid=5653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:49.212000 audit[5653]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd3cdc59c0 a2=0 a3=7ffd3cdc59ac items=0 ppid=2776 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:49.253209 kernel: audit: type=1300 audit(1757724889.212:430): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd3cdc59c0 a2=0 a3=7ffd3cdc59ac items=0 ppid=2776 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:49.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:49.281923 env[1564]: time="2025-09-13T00:54:49.269473211Z" level=info msg="CreateContainer within sandbox \"bf1908fc8ba0aef904b58e65de0930cbd274966862558774e38d6c21e7599a38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c1c0eb25920c82d628ff3e8cfec483f5107d9f80d13fd334bb79e3cc5ebc2000\"" Sep 13 00:54:49.282576 kernel: audit: type=1327 audit(1757724889.212:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:49.282906 env[1564]: time="2025-09-13T00:54:49.282872474Z" level=info msg="StartContainer for \"c1c0eb25920c82d628ff3e8cfec483f5107d9f80d13fd334bb79e3cc5ebc2000\"" Sep 13 
00:54:49.264000 audit[5653]: NETFILTER_CFG table=nat:129 family=2 entries=30 op=nft_register_rule pid=5653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:49.319206 kernel: audit: type=1325 audit(1757724889.264:431): table=nat:129 family=2 entries=30 op=nft_register_rule pid=5653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:49.264000 audit[5653]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd3cdc59c0 a2=0 a3=7ffd3cdc59ac items=0 ppid=2776 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:49.342562 kernel: audit: type=1300 audit(1757724889.264:431): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd3cdc59c0 a2=0 a3=7ffd3cdc59ac items=0 ppid=2776 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:49.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:49.356668 kernel: audit: type=1327 audit(1757724889.264:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:49.379111 systemd[1]: run-containerd-runc-k8s.io-c1c0eb25920c82d628ff3e8cfec483f5107d9f80d13fd334bb79e3cc5ebc2000-runc.ntMmyx.mount: Deactivated successfully. 
Sep 13 00:54:49.462802 env[1564]: time="2025-09-13T00:54:49.462751884Z" level=info msg="StartContainer for \"c1c0eb25920c82d628ff3e8cfec483f5107d9f80d13fd334bb79e3cc5ebc2000\" returns successfully" Sep 13 00:54:50.134774 kubelet[2667]: I0913 00:54:50.133987 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:50.203000 audit[5695]: NETFILTER_CFG table=filter:130 family=2 entries=12 op=nft_register_rule pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:50.218262 kernel: audit: type=1325 audit(1757724890.203:432): table=filter:130 family=2 entries=12 op=nft_register_rule pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:50.203000 audit[5695]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffccd97eff0 a2=0 a3=7ffccd97efdc items=0 ppid=2776 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:50.247627 kernel: audit: type=1300 audit(1757724890.203:432): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffccd97eff0 a2=0 a3=7ffccd97efdc items=0 ppid=2776 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:50.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:50.261327 kernel: audit: type=1327 audit(1757724890.203:432): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:50.218000 audit[5695]: NETFILTER_CFG table=nat:131 family=2 entries=30 op=nft_register_rule pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 
00:54:50.275206 kernel: audit: type=1325 audit(1757724890.218:433): table=nat:131 family=2 entries=30 op=nft_register_rule pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:50.218000 audit[5695]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffccd97eff0 a2=0 a3=7ffccd97efdc items=0 ppid=2776 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:50.304541 kernel: audit: type=1300 audit(1757724890.218:433): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffccd97eff0 a2=0 a3=7ffccd97efdc items=0 ppid=2776 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:50.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:50.317208 kernel: audit: type=1327 audit(1757724890.218:433): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:50.738702 env[1564]: time="2025-09-13T00:54:50.738658531Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:50.749375 env[1564]: time="2025-09-13T00:54:50.749337002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:50.757560 env[1564]: time="2025-09-13T00:54:50.757523080Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:50.761997 env[1564]: time="2025-09-13T00:54:50.761971068Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:50.762736 env[1564]: time="2025-09-13T00:54:50.762713066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:54:50.764758 env[1564]: time="2025-09-13T00:54:50.764738461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:54:50.765601 env[1564]: time="2025-09-13T00:54:50.765573858Z" level=info msg="CreateContainer within sandbox \"28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:54:50.816276 env[1564]: time="2025-09-13T00:54:50.816223922Z" level=info msg="CreateContainer within sandbox \"28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7940cbf99b94f89818ce6ed9dcf7c94f448b7c72aaa5d297d433398deaf86b02\"" Sep 13 00:54:50.817315 env[1564]: time="2025-09-13T00:54:50.817283719Z" level=info msg="StartContainer for \"7940cbf99b94f89818ce6ed9dcf7c94f448b7c72aaa5d297d433398deaf86b02\"" Sep 13 00:54:51.005329 env[1564]: time="2025-09-13T00:54:51.005222813Z" level=info msg="StartContainer for \"7940cbf99b94f89818ce6ed9dcf7c94f448b7c72aaa5d297d433398deaf86b02\" returns successfully" Sep 13 00:54:51.137966 kubelet[2667]: I0913 00:54:51.137712 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:54.885658 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount540542418.mount: Deactivated successfully. Sep 13 00:54:57.870421 env[1564]: time="2025-09-13T00:54:57.870371088Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:57.876921 env[1564]: time="2025-09-13T00:54:57.876880572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:57.881127 env[1564]: time="2025-09-13T00:54:57.881092661Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:57.885002 env[1564]: time="2025-09-13T00:54:57.884970051Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:57.885490 env[1564]: time="2025-09-13T00:54:57.885462750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:54:57.888143 env[1564]: time="2025-09-13T00:54:57.888114143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:54:57.889234 env[1564]: time="2025-09-13T00:54:57.889203241Z" level=info msg="CreateContainer within sandbox \"29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:54:57.927334 env[1564]: time="2025-09-13T00:54:57.927283045Z" level=info msg="CreateContainer within sandbox 
\"29b72c948c4749bbe9aff3eeebc9e82a3f33dfdb8382cb7e781e576a4786e284\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a4fc063cfbf5221ab8de75b55a96830862aeb3f0c8e090acb6c206913a998c9d\"" Sep 13 00:54:57.928084 env[1564]: time="2025-09-13T00:54:57.928055843Z" level=info msg="StartContainer for \"a4fc063cfbf5221ab8de75b55a96830862aeb3f0c8e090acb6c206913a998c9d\"" Sep 13 00:54:58.039295 env[1564]: time="2025-09-13T00:54:58.039252266Z" level=info msg="StartContainer for \"a4fc063cfbf5221ab8de75b55a96830862aeb3f0c8e090acb6c206913a998c9d\" returns successfully" Sep 13 00:54:58.190410 kubelet[2667]: I0913 00:54:58.190266 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7cccf4cd77-447vd" podStartSLOduration=2.1792249 podStartE2EDuration="28.190249492s" podCreationTimestamp="2025-09-13 00:54:30 +0000 UTC" firstStartedPulling="2025-09-13 00:54:31.876002954 +0000 UTC m=+52.252056242" lastFinishedPulling="2025-09-13 00:54:57.887027446 +0000 UTC m=+78.263080834" observedRunningTime="2025-09-13 00:54:58.190219392 +0000 UTC m=+78.566272680" watchObservedRunningTime="2025-09-13 00:54:58.190249492 +0000 UTC m=+78.566302780" Sep 13 00:54:58.191076 kubelet[2667]: I0913 00:54:58.191029 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67c4bc6787-kfxxk" podStartSLOduration=44.323850577 podStartE2EDuration="1m0.19101529s" podCreationTimestamp="2025-09-13 00:53:58 +0000 UTC" firstStartedPulling="2025-09-13 00:54:33.291762799 +0000 UTC m=+53.667816187" lastFinishedPulling="2025-09-13 00:54:49.158927612 +0000 UTC m=+69.534980900" observedRunningTime="2025-09-13 00:54:50.1560906 +0000 UTC m=+70.532143888" watchObservedRunningTime="2025-09-13 00:54:58.19101529 +0000 UTC m=+78.567068678" Sep 13 00:54:58.206000 audit[5775]: NETFILTER_CFG table=filter:132 family=2 entries=11 op=nft_register_rule pid=5775 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Sep 13 00:54:58.206000 audit[5775]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffdc401f8b0 a2=0 a3=7ffdc401f89c items=0 ppid=2776 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:58.245157 kernel: audit: type=1325 audit(1757724898.206:434): table=filter:132 family=2 entries=11 op=nft_register_rule pid=5775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:58.245323 kernel: audit: type=1300 audit(1757724898.206:434): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffdc401f8b0 a2=0 a3=7ffdc401f89c items=0 ppid=2776 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:58.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:58.247000 audit[5775]: NETFILTER_CFG table=nat:133 family=2 entries=29 op=nft_register_chain pid=5775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:58.271790 kernel: audit: type=1327 audit(1757724898.206:434): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:58.271895 kernel: audit: type=1325 audit(1757724898.247:435): table=nat:133 family=2 entries=29 op=nft_register_chain pid=5775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:58.247000 audit[5775]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffdc401f8b0 a2=0 a3=7ffdc401f89c items=0 ppid=2776 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:58.310274 kernel: audit: type=1300 audit(1757724898.247:435): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffdc401f8b0 a2=0 a3=7ffdc401f89c items=0 ppid=2776 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:58.310378 kernel: audit: type=1327 audit(1757724898.247:435): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:58.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:59.034183 kubelet[2667]: I0913 00:54:59.033313 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:59.228217 kubelet[2667]: I0913 00:54:59.227425 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:59.252143 env[1564]: time="2025-09-13T00:54:59.252096467Z" level=info msg="StopContainer for \"c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943\" with timeout 30 (s)" Sep 13 00:54:59.269525 env[1564]: time="2025-09-13T00:54:59.269480024Z" level=info msg="Stop container \"c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943\" with signal terminated" Sep 13 00:54:59.365687 kernel: audit: type=1325 audit(1757724899.349:436): table=filter:134 family=2 entries=10 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:59.349000 audit[5777]: NETFILTER_CFG table=filter:134 family=2 entries=10 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:59.412767 kernel: audit: type=1300 audit(1757724899.349:436): arch=c000003e syscall=46 success=yes exit=3760 a0=3 
a1=7fff72618d00 a2=0 a3=7fff72618cec items=0 ppid=2776 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:59.349000 audit[5777]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff72618d00 a2=0 a3=7fff72618cec items=0 ppid=2776 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:59.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:59.435328 kernel: audit: type=1327 audit(1757724899.349:436): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:59.383000 audit[5777]: NETFILTER_CFG table=nat:135 family=2 entries=36 op=nft_register_chain pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:59.383000 audit[5777]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7fff72618d00 a2=0 a3=7fff72618cec items=0 ppid=2776 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:59.383000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:59.454211 kernel: audit: type=1325 audit(1757724899.383:437): table=nat:135 family=2 entries=36 op=nft_register_chain pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:59.469578 kubelet[2667]: I0913 00:54:59.468420 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4294a5a-6ada-4559-9881-749c35f24eab-calico-apiserver-certs\") pod \"calico-apiserver-67c4bc6787-b8hpw\" (UID: \"f4294a5a-6ada-4559-9881-749c35f24eab\") " pod="calico-apiserver/calico-apiserver-67c4bc6787-b8hpw" Sep 13 00:54:59.469578 kubelet[2667]: I0913 00:54:59.468486 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79z94\" (UniqueName: \"kubernetes.io/projected/f4294a5a-6ada-4559-9881-749c35f24eab-kube-api-access-79z94\") pod \"calico-apiserver-67c4bc6787-b8hpw\" (UID: \"f4294a5a-6ada-4559-9881-749c35f24eab\") " pod="calico-apiserver/calico-apiserver-67c4bc6787-b8hpw" Sep 13 00:54:59.565000 audit[5793]: NETFILTER_CFG table=filter:136 family=2 entries=10 op=nft_register_rule pid=5793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:59.565000 audit[5793]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffec78a8ad0 a2=0 a3=7ffec78a8abc items=0 ppid=2776 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:59.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:59.603000 audit[5793]: NETFILTER_CFG table=nat:137 family=2 entries=38 op=nft_register_rule pid=5793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:59.603000 audit[5793]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffec78a8ad0 a2=0 a3=7ffec78a8abc items=0 ppid=2776 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:59.603000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:59.686057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943-rootfs.mount: Deactivated successfully. Sep 13 00:54:59.706694 env[1564]: time="2025-09-13T00:54:59.706647251Z" level=info msg="shim disconnected" id=c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943 Sep 13 00:54:59.706888 env[1564]: time="2025-09-13T00:54:59.706868951Z" level=warning msg="cleaning up after shim disconnected" id=c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943 namespace=k8s.io Sep 13 00:54:59.706995 env[1564]: time="2025-09-13T00:54:59.706982151Z" level=info msg="cleaning up dead shim" Sep 13 00:54:59.728886 env[1564]: time="2025-09-13T00:54:59.728832897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4bc6787-b8hpw,Uid:f4294a5a-6ada-4559-9881-749c35f24eab,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:54:59.739754 env[1564]: time="2025-09-13T00:54:59.739712570Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5802 runtime=io.containerd.runc.v2\n" Sep 13 00:55:06.158724 env[1564]: time="2025-09-13T00:55:06.158676989Z" level=info msg="StopContainer for \"c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943\" returns successfully" Sep 13 00:55:06.159819 env[1564]: time="2025-09-13T00:55:06.159788386Z" level=info msg="StopPodSandbox for \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\"" Sep 13 00:55:06.159934 env[1564]: time="2025-09-13T00:55:06.159870486Z" level=info msg="Container to stop \"c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:55:06.167014 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82-shm.mount: Deactivated successfully. Sep 13 00:55:06.203271 env[1564]: time="2025-09-13T00:55:06.193664308Z" level=info msg="shim disconnected" id=bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82 Sep 13 00:55:06.203271 env[1564]: time="2025-09-13T00:55:06.193716308Z" level=warning msg="cleaning up after shim disconnected" id=bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82 namespace=k8s.io Sep 13 00:55:06.203271 env[1564]: time="2025-09-13T00:55:06.193728908Z" level=info msg="cleaning up dead shim" Sep 13 00:55:06.196363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82-rootfs.mount: Deactivated successfully. Sep 13 00:55:06.211375 env[1564]: time="2025-09-13T00:55:06.211334267Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5835 runtime=io.containerd.runc.v2\n" Sep 13 00:55:06.862482 systemd-networkd[1746]: cali2da3051687e: Link DOWN Sep 13 00:55:06.862489 systemd-networkd[1746]: cali2da3051687e: Lost carrier Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.861 [INFO][5858] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.861 [INFO][5858] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" iface="eth0" netns="/var/run/netns/cni-22a3b961-9fee-f344-6cbb-c2a2ac421563" Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.861 [INFO][5858] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" iface="eth0" netns="/var/run/netns/cni-22a3b961-9fee-f344-6cbb-c2a2ac421563" Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.874 [INFO][5858] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" after=12.620671ms iface="eth0" netns="/var/run/netns/cni-22a3b961-9fee-f344-6cbb-c2a2ac421563" Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.874 [INFO][5858] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.874 [INFO][5858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.904 [INFO][5867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.904 [INFO][5867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.904 [INFO][5867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.978 [INFO][5867] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.979 [INFO][5867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.981 [INFO][5867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:06.984232 env[1564]: 2025-09-13 00:55:06.982 [INFO][5858] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:06.994264 systemd[1]: run-netns-cni\x2d22a3b961\x2d9fee\x2df344\x2d6cbb\x2dc2a2ac421563.mount: Deactivated successfully. 
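The `PROCTITLE` fields in the audit records above carry the invoking command line as a hex-encoded, NUL-separated argv. As a reading aid (not part of the log itself), a minimal sketch that decodes one of the values appearing in these records:

```python
# Decode a Linux audit PROCTITLE value: the kernel emits argv as a
# single hex string in which the original NUL separators between
# arguments are preserved as 0x00 bytes.
def decode_proctitle(hexstr: str) -> list[str]:
    return bytes.fromhex(hexstr).decode("ascii").split("\x00")

# Hex string copied verbatim from the audit records in this log.
argv = decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
print(" ".join(argv))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

The same decoding applied to the `iptables-nft-re` records yields `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`, which matches the `comm=`/`exe=` fields (`xtables-nft-multi`) on those entries.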
Sep 13 00:55:06.999396 env[1564]: time="2025-09-13T00:55:06.999334247Z" level=info msg="TearDown network for sandbox \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\" successfully" Sep 13 00:55:06.999598 env[1564]: time="2025-09-13T00:55:06.999570746Z" level=info msg="StopPodSandbox for \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\" returns successfully" Sep 13 00:55:07.000277 env[1564]: time="2025-09-13T00:55:07.000250845Z" level=info msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\"" Sep 13 00:55:07.028603 kernel: kauditd_printk_skb: 8 callbacks suppressed Sep 13 00:55:07.028763 kernel: audit: type=1325 audit(1757724907.008:440): table=filter:138 family=2 entries=67 op=nft_register_rule pid=5877 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:07.008000 audit[5877]: NETFILTER_CFG table=filter:138 family=2 entries=67 op=nft_register_rule pid=5877 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:07.008000 audit[5877]: SYSCALL arch=c000003e syscall=46 success=yes exit=11456 a0=3 a1=7ffe8d1cebb0 a2=0 a3=7ffe8d1ceb9c items=0 ppid=4349 pid=5877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:07.071282 kernel: audit: type=1300 audit(1757724907.008:440): arch=c000003e syscall=46 success=yes exit=11456 a0=3 a1=7ffe8d1cebb0 a2=0 a3=7ffe8d1ceb9c items=0 ppid=4349 pid=5877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:07.008000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:07.108310 
kernel: audit: type=1327 audit(1757724907.008:440): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:07.132979 kernel: audit: type=1325 audit(1757724907.028:441): table=filter:139 family=2 entries=4 op=nft_unregister_chain pid=5877 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:07.028000 audit[5877]: NETFILTER_CFG table=filter:139 family=2 entries=4 op=nft_unregister_chain pid=5877 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:07.174377 kernel: audit: type=1300 audit(1757724907.028:441): arch=c000003e syscall=46 success=yes exit=560 a0=3 a1=7ffe8d1cebb0 a2=0 a3=562c9bf90000 items=0 ppid=4349 pid=5877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:07.028000 audit[5877]: SYSCALL arch=c000003e syscall=46 success=yes exit=560 a0=3 a1=7ffe8d1cebb0 a2=0 a3=562c9bf90000 items=0 ppid=4349 pid=5877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:07.028000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:07.232236 kernel: audit: type=1327 audit(1757724907.028:441): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:07.284459 kubelet[2667]: I0913 00:55:07.284407 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:07.352644 
env[1564]: 2025-09-13 00:55:07.215 [WARNING][5886] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0", GenerateName:"calico-apiserver-748d86bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748d86bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82", Pod:"calico-apiserver-748d86bbf-zd92h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2da3051687e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.216 [INFO][5886] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.216 [INFO][5886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" iface="eth0" netns="" Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.216 [INFO][5886] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.216 [INFO][5886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.334 [INFO][5893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.334 [INFO][5893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.334 [INFO][5893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.346 [WARNING][5893] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.346 [INFO][5893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.348 [INFO][5893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:07.352644 env[1564]: 2025-09-13 00:55:07.351 [INFO][5886] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:07.353697 env[1564]: time="2025-09-13T00:55:07.353656035Z" level=info msg="TearDown network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" successfully" Sep 13 00:55:07.353777 env[1564]: time="2025-09-13T00:55:07.353758934Z" level=info msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" returns successfully" Sep 13 00:55:07.453346 kubelet[2667]: I0913 00:55:07.437448 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf-calico-apiserver-certs\") pod \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\" (UID: \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\") " Sep 13 00:55:07.453346 kubelet[2667]: I0913 00:55:07.437510 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8f79\" (UniqueName: 
\"kubernetes.io/projected/64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf-kube-api-access-m8f79\") pod \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\" (UID: \"64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf\") " Sep 13 00:55:07.450101 systemd[1]: var-lib-kubelet-pods-64dc81d0\x2dedd0\x2d445d\x2d9bf4\x2d14a4d7ebd8bf-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:55:07.457399 kubelet[2667]: I0913 00:55:07.457358 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf" (UID: "64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:55:07.471882 systemd[1]: var-lib-kubelet-pods-64dc81d0\x2dedd0\x2d445d\x2d9bf4\x2d14a4d7ebd8bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm8f79.mount: Deactivated successfully. Sep 13 00:55:07.478120 kubelet[2667]: I0913 00:55:07.477852 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf-kube-api-access-m8f79" (OuterVolumeSpecName: "kube-api-access-m8f79") pod "64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf" (UID: "64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf"). InnerVolumeSpecName "kube-api-access-m8f79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:55:07.519354 kernel: audit: type=1325 audit(1757724907.499:442): table=filter:140 family=2 entries=10 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:07.499000 audit[5919]: NETFILTER_CFG table=filter:140 family=2 entries=10 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:07.553645 kernel: audit: type=1300 audit(1757724907.499:442): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffdfcac6bc0 a2=0 a3=7ffdfcac6bac items=0 ppid=2776 pid=5919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:07.499000 audit[5919]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffdfcac6bc0 a2=0 a3=7ffdfcac6bac items=0 ppid=2776 pid=5919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:07.554354 kubelet[2667]: I0913 00:55:07.554126 2667 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf-calico-apiserver-certs\") on node \"ci-3510.3.8-n-1677b4f607\" DevicePath \"\"" Sep 13 00:55:07.554354 kubelet[2667]: I0913 00:55:07.554159 2667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8f79\" (UniqueName: \"kubernetes.io/projected/64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf-kube-api-access-m8f79\") on node \"ci-3510.3.8-n-1677b4f607\" DevicePath \"\"" Sep 13 00:55:07.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:07.588621 kernel: audit: type=1327 
audit(1757724907.499:442): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:07.614203 kernel: audit: type=1325 audit(1757724907.526:443): table=nat:141 family=2 entries=38 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:07.526000 audit[5919]: NETFILTER_CFG table=nat:141 family=2 entries=38 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:07.526000 audit[5919]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffdfcac6bc0 a2=0 a3=7ffdfcac6bac items=0 ppid=2776 pid=5919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:07.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:07.644047 systemd-networkd[1746]: cali2d722d6ba34: Link UP Sep 13 00:55:07.659708 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2d722d6ba34: link becomes ready Sep 13 00:55:07.662672 systemd-networkd[1746]: cali2d722d6ba34: Gained carrier Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.411 [INFO][5897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0 calico-apiserver-67c4bc6787- calico-apiserver f4294a5a-6ada-4559-9881-749c35f24eab 1149 0 2025-09-13 00:54:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67c4bc6787 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-1677b4f607 calico-apiserver-67c4bc6787-b8hpw eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d722d6ba34 [] [] }} ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-b8hpw" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.411 [INFO][5897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-b8hpw" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.533 [INFO][5912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" HandleID="k8s-pod-network.ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.534 [INFO][5912] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" HandleID="k8s-pod-network.ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-1677b4f607", "pod":"calico-apiserver-67c4bc6787-b8hpw", "timestamp":"2025-09-13 00:55:07.533449922 +0000 UTC"}, Hostname:"ci-3510.3.8-n-1677b4f607", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:55:07.683443 
env[1564]: 2025-09-13 00:55:07.534 [INFO][5912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.534 [INFO][5912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.534 [INFO][5912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-1677b4f607' Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.541 [INFO][5912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.567 [INFO][5912] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-1677b4f607" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.589 [INFO][5912] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.591 [INFO][5912] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.595 [INFO][5912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-3510.3.8-n-1677b4f607" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.595 [INFO][5912] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.596 [INFO][5912] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433 Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.614 [INFO][5912] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.64/26 
handle="k8s-pod-network.ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.628 [INFO][5912] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.74/26] block=192.168.28.64/26 handle="k8s-pod-network.ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.628 [INFO][5912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.74/26] handle="k8s-pod-network.ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" host="ci-3510.3.8-n-1677b4f607" Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.628 [INFO][5912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:07.683443 env[1564]: 2025-09-13 00:55:07.628 [INFO][5912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.74/26] IPv6=[] ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" HandleID="k8s-pod-network.ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" Sep 13 00:55:07.684330 env[1564]: 2025-09-13 00:55:07.630 [INFO][5897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-b8hpw" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0", GenerateName:"calico-apiserver-67c4bc6787-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4294a5a-6ada-4559-9881-749c35f24eab", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, 
time.September, 13, 0, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4bc6787", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"", Pod:"calico-apiserver-67c4bc6787-b8hpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d722d6ba34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:07.684330 env[1564]: 2025-09-13 00:55:07.630 [INFO][5897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.74/32] ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-b8hpw" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" Sep 13 00:55:07.684330 env[1564]: 2025-09-13 00:55:07.631 [INFO][5897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d722d6ba34 ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-b8hpw" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" Sep 13 00:55:07.684330 env[1564]: 2025-09-13 00:55:07.663 [INFO][5897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-b8hpw" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" Sep 13 00:55:07.684330 env[1564]: 2025-09-13 00:55:07.663 [INFO][5897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-b8hpw" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0", GenerateName:"calico-apiserver-67c4bc6787-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4294a5a-6ada-4559-9881-749c35f24eab", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4bc6787", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433", Pod:"calico-apiserver-67c4bc6787-b8hpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d722d6ba34", MAC:"4a:f6:4d:16:60:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:07.684330 env[1564]: 2025-09-13 00:55:07.681 [INFO][5897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433" Namespace="calico-apiserver" Pod="calico-apiserver-67c4bc6787-b8hpw" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--67c4bc6787--b8hpw-eth0" Sep 13 00:55:07.731451 env[1564]: time="2025-09-13T00:55:07.729671573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:07.731670 env[1564]: time="2025-09-13T00:55:07.731637068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:07.731773 env[1564]: time="2025-09-13T00:55:07.731751668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:07.732018 env[1564]: time="2025-09-13T00:55:07.731990067Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433 pid=5936 runtime=io.containerd.runc.v2 Sep 13 00:55:07.870000 audit[5957]: NETFILTER_CFG table=filter:142 family=2 entries=71 op=nft_register_chain pid=5957 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:07.870000 audit[5957]: SYSCALL arch=c000003e syscall=46 success=yes exit=33056 a0=3 a1=7fffcc9d5bf0 a2=0 a3=7fffcc9d5bdc items=0 ppid=4349 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:07.870000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:08.140988 env[1564]: time="2025-09-13T00:55:08.140946432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4bc6787-b8hpw,Uid:f4294a5a-6ada-4559-9881-749c35f24eab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433\"" Sep 13 00:55:08.143950 env[1564]: time="2025-09-13T00:55:08.143914226Z" level=info msg="CreateContainer within sandbox \"ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:55:08.174833 env[1564]: time="2025-09-13T00:55:08.174784655Z" level=info msg="CreateContainer within sandbox \"ade3c56da722432dbff050f92397469c4e450756b7c7bc45c55f9ddda7e91433\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"ea58d8a8a4ed74394945218b9f083eefbe789059866f7679dbce776f8ea6f147\"" Sep 13 00:55:08.175755 env[1564]: time="2025-09-13T00:55:08.175725053Z" level=info msg="StartContainer for \"ea58d8a8a4ed74394945218b9f083eefbe789059866f7679dbce776f8ea6f147\"" Sep 13 00:55:08.303218 systemd[1]: run-containerd-runc-k8s.io-ea58d8a8a4ed74394945218b9f083eefbe789059866f7679dbce776f8ea6f147-runc.H37xKd.mount: Deactivated successfully. Sep 13 00:55:08.868442 systemd-networkd[1746]: cali2d722d6ba34: Gained IPv6LL Sep 13 00:55:08.916167 env[1564]: time="2025-09-13T00:55:08.916119169Z" level=info msg="StartContainer for \"ea58d8a8a4ed74394945218b9f083eefbe789059866f7679dbce776f8ea6f147\" returns successfully" Sep 13 00:55:09.208617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216851957.mount: Deactivated successfully. Sep 13 00:55:09.375000 audit[6011]: NETFILTER_CFG table=filter:143 family=2 entries=10 op=nft_register_rule pid=6011 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:09.375000 audit[6011]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc01c9bd90 a2=0 a3=7ffc01c9bd7c items=0 ppid=2776 pid=6011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:09.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:09.381000 audit[6011]: NETFILTER_CFG table=nat:144 family=2 entries=38 op=nft_register_rule pid=6011 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:09.381000 audit[6011]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffc01c9bd90 a2=0 a3=7ffc01c9bd7c items=0 ppid=2776 pid=6011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:09.381000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:09.782458 kubelet[2667]: I0913 00:55:09.782425 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf" path="/var/lib/kubelet/pods/64dc81d0-edd0-445d-9bf4-14a4d7ebd8bf/volumes" Sep 13 00:55:10.665032 env[1564]: time="2025-09-13T00:55:10.664981230Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:10.672026 env[1564]: time="2025-09-13T00:55:10.671981215Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:10.676440 env[1564]: time="2025-09-13T00:55:10.676404305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:10.680647 env[1564]: time="2025-09-13T00:55:10.680609795Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:10.681334 env[1564]: time="2025-09-13T00:55:10.681261194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:55:10.683344 env[1564]: time="2025-09-13T00:55:10.683318389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:55:10.684568 env[1564]: 
time="2025-09-13T00:55:10.684540186Z" level=info msg="CreateContainer within sandbox \"9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:55:10.741012 env[1564]: time="2025-09-13T00:55:10.740964760Z" level=info msg="CreateContainer within sandbox \"9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106\"" Sep 13 00:55:10.742220 env[1564]: time="2025-09-13T00:55:10.742174957Z" level=info msg="StartContainer for \"cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106\"" Sep 13 00:55:10.788240 systemd[1]: run-containerd-runc-k8s.io-cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106-runc.SrFiUH.mount: Deactivated successfully. Sep 13 00:55:10.882044 env[1564]: time="2025-09-13T00:55:10.879353350Z" level=info msg="StartContainer for \"cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106\" returns successfully" Sep 13 00:55:11.332146 kubelet[2667]: I0913 00:55:11.332114 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:11.353325 kubelet[2667]: I0913 00:55:11.352763 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-rtn9j" podStartSLOduration=45.084535774 podStartE2EDuration="1m11.352742394s" podCreationTimestamp="2025-09-13 00:54:00 +0000 UTC" firstStartedPulling="2025-09-13 00:54:44.414131171 +0000 UTC m=+64.790184459" lastFinishedPulling="2025-09-13 00:55:10.682337691 +0000 UTC m=+91.058391079" observedRunningTime="2025-09-13 00:55:11.349548102 +0000 UTC m=+91.725601490" watchObservedRunningTime="2025-09-13 00:55:11.352742394 +0000 UTC m=+91.728795782" Sep 13 00:55:11.353325 kubelet[2667]: I0913 00:55:11.353131 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-67c4bc6787-b8hpw" podStartSLOduration=12.353118694 podStartE2EDuration="12.353118694s" podCreationTimestamp="2025-09-13 00:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:55:09.348617491 +0000 UTC m=+89.724670779" watchObservedRunningTime="2025-09-13 00:55:11.353118694 +0000 UTC m=+91.729172082" Sep 13 00:55:11.409000 audit[6060]: NETFILTER_CFG table=filter:145 family=2 entries=10 op=nft_register_rule pid=6060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.409000 audit[6060]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffdd92a40c0 a2=0 a3=7ffdd92a40ac items=0 ppid=2776 pid=6060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:11.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:11.415000 audit[6060]: NETFILTER_CFG table=nat:146 family=2 entries=24 op=nft_register_rule pid=6060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.415000 audit[6060]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffdd92a40c0 a2=0 a3=7ffdd92a40ac items=0 ppid=2776 pid=6060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:11.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:11.729594 systemd[1]: run-containerd-runc-k8s.io-cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106-runc.zSo4Mq.mount: Deactivated successfully. 
Sep 13 00:55:11.873000 audit[6066]: NETFILTER_CFG table=filter:147 family=2 entries=10 op=nft_register_rule pid=6066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.873000 audit[6066]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc7a301eb0 a2=0 a3=7ffc7a301e9c items=0 ppid=2776 pid=6066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:11.873000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:11.878000 audit[6066]: NETFILTER_CFG table=nat:148 family=2 entries=42 op=nft_register_chain pid=6066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.878000 audit[6066]: SYSCALL arch=c000003e syscall=46 success=yes exit=13892 a0=3 a1=7ffc7a301eb0 a2=0 a3=7ffc7a301e9c items=0 ppid=2776 pid=6066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:11.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:11.892338 env[1564]: time="2025-09-13T00:55:11.892301394Z" level=info msg="StopContainer for \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\" with timeout 30 (s)" Sep 13 00:55:11.893356 env[1564]: time="2025-09-13T00:55:11.893324591Z" level=info msg="Stop container \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\" with signal terminated" Sep 13 00:55:12.014526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269-rootfs.mount: Deactivated successfully. 
Sep 13 00:55:12.018087 env[1564]: time="2025-09-13T00:55:12.018027414Z" level=info msg="shim disconnected" id=ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269 Sep 13 00:55:12.018233 env[1564]: time="2025-09-13T00:55:12.018110414Z" level=warning msg="cleaning up after shim disconnected" id=ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269 namespace=k8s.io Sep 13 00:55:12.018233 env[1564]: time="2025-09-13T00:55:12.018124114Z" level=info msg="cleaning up dead shim" Sep 13 00:55:12.047500 env[1564]: time="2025-09-13T00:55:12.047424249Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6087 runtime=io.containerd.runc.v2\n" Sep 13 00:55:12.391460 systemd[1]: run-containerd-runc-k8s.io-cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106-runc.KrcI2N.mount: Deactivated successfully. Sep 13 00:55:13.019578 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 00:55:13.019756 kernel: audit: type=1325 audit(1757724912.998:451): table=filter:149 family=2 entries=10 op=nft_register_rule pid=6122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:12.998000 audit[6122]: NETFILTER_CFG table=filter:149 family=2 entries=10 op=nft_register_rule pid=6122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:13.044679 kernel: audit: type=1300 audit(1757724912.998:451): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe4e790220 a2=0 a3=7ffe4e79020c items=0 ppid=2776 pid=6122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:12.998000 audit[6122]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe4e790220 a2=0 a3=7ffe4e79020c items=0 ppid=2776 pid=6122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:13.058096 kernel: audit: type=1327 audit(1757724912.998:451): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:12.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:13.060000 audit[6122]: NETFILTER_CFG table=nat:150 family=2 entries=42 op=nft_unregister_chain pid=6122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:13.075374 kernel: audit: type=1325 audit(1757724913.060:452): table=nat:150 family=2 entries=42 op=nft_unregister_chain pid=6122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:13.060000 audit[6122]: SYSCALL arch=c000003e syscall=46 success=yes exit=12132 a0=3 a1=7ffe4e790220 a2=0 a3=7ffe4e79020c items=0 ppid=2776 pid=6122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:13.101296 kernel: audit: type=1300 audit(1757724913.060:452): arch=c000003e syscall=46 success=yes exit=12132 a0=3 a1=7ffe4e790220 a2=0 a3=7ffe4e79020c items=0 ppid=2776 pid=6122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:13.116370 kernel: audit: type=1327 audit(1757724913.060:452): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:13.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:13.370675 systemd[1]: 
run-containerd-runc-k8s.io-cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106-runc.nnq0l5.mount: Deactivated successfully. Sep 13 00:55:13.992804 systemd[1]: run-containerd-runc-k8s.io-5ae9b005ed0a0c331e39266c551ee52c53930b99d68d13b128d8850e9f501108-runc.vSCVIR.mount: Deactivated successfully. Sep 13 00:55:14.359137 systemd[1]: run-containerd-runc-k8s.io-cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106-runc.MpxrM8.mount: Deactivated successfully. Sep 13 00:55:14.678450 env[1564]: time="2025-09-13T00:55:14.678405070Z" level=info msg="StopContainer for \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\" returns successfully" Sep 13 00:55:14.679448 env[1564]: time="2025-09-13T00:55:14.679421968Z" level=info msg="StopPodSandbox for \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\"" Sep 13 00:55:14.679627 env[1564]: time="2025-09-13T00:55:14.679606067Z" level=info msg="Container to stop \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:55:14.686353 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c-shm.mount: Deactivated successfully. Sep 13 00:55:14.788639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c-rootfs.mount: Deactivated successfully. 
Sep 13 00:55:14.795147 env[1564]: time="2025-09-13T00:55:14.795099416Z" level=info msg="shim disconnected" id=489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c Sep 13 00:55:14.795443 env[1564]: time="2025-09-13T00:55:14.795420215Z" level=warning msg="cleaning up after shim disconnected" id=489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c namespace=k8s.io Sep 13 00:55:14.795519 env[1564]: time="2025-09-13T00:55:14.795507015Z" level=info msg="cleaning up dead shim" Sep 13 00:55:14.840420 env[1564]: time="2025-09-13T00:55:14.840375317Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6201 runtime=io.containerd.runc.v2\n" Sep 13 00:55:15.011802 systemd-networkd[1746]: cali441ae136b3d: Link DOWN Sep 13 00:55:15.011809 systemd-networkd[1746]: cali441ae136b3d: Lost carrier Sep 13 00:55:15.032000 audit[6234]: NETFILTER_CFG table=filter:151 family=2 entries=59 op=nft_register_rule pid=6234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:15.048282 kernel: audit: type=1325 audit(1757724915.032:453): table=filter:151 family=2 entries=59 op=nft_register_rule pid=6234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:15.080332 kernel: audit: type=1300 audit(1757724915.032:453): arch=c000003e syscall=46 success=yes exit=10132 a0=3 a1=7fffb8d518f0 a2=0 a3=7fffb8d518dc items=0 ppid=4349 pid=6234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.032000 audit[6234]: SYSCALL arch=c000003e syscall=46 success=yes exit=10132 a0=3 a1=7fffb8d518f0 a2=0 a3=7fffb8d518dc items=0 ppid=4349 pid=6234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.032000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:15.118206 kernel: audit: type=1327 audit(1757724915.032:453): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:15.033000 audit[6234]: NETFILTER_CFG table=filter:152 family=2 entries=2 op=nft_unregister_chain pid=6234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:15.134253 kernel: audit: type=1325 audit(1757724915.033:454): table=filter:152 family=2 entries=2 op=nft_unregister_chain pid=6234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:15.033000 audit[6234]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fffb8d518f0 a2=0 a3=5574ac986000 items=0 ppid=4349 pid=6234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.033000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.010 [INFO][6223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.010 [INFO][6223] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" iface="eth0" netns="/var/run/netns/cni-a6477d2f-b487-dab2-defb-a60d44b80001" Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.011 [INFO][6223] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" iface="eth0" netns="/var/run/netns/cni-a6477d2f-b487-dab2-defb-a60d44b80001" Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.048 [INFO][6223] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" after=37.547819ms iface="eth0" netns="/var/run/netns/cni-a6477d2f-b487-dab2-defb-a60d44b80001" Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.048 [INFO][6223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.048 [INFO][6223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.155 [INFO][6236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.156 [INFO][6236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.156 [INFO][6236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.205 [INFO][6236] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.205 [INFO][6236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.206 [INFO][6236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:15.209546 env[1564]: 2025-09-13 00:55:15.207 [INFO][6223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:15.210392 env[1564]: time="2025-09-13T00:55:15.210352513Z" level=info msg="TearDown network for sandbox \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\" successfully" Sep 13 00:55:15.210494 env[1564]: time="2025-09-13T00:55:15.210473913Z" level=info msg="StopPodSandbox for \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\" returns successfully" Sep 13 00:55:15.217881 systemd[1]: run-netns-cni\x2da6477d2f\x2db487\x2ddab2\x2ddefb\x2da60d44b80001.mount: Deactivated successfully. 
Sep 13 00:55:15.235596 kubelet[2667]: I0913 00:55:15.234463 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/befb9c20-74f9-48dc-9181-e5e1cb0477a7-calico-apiserver-certs\") pod \"befb9c20-74f9-48dc-9181-e5e1cb0477a7\" (UID: \"befb9c20-74f9-48dc-9181-e5e1cb0477a7\") " Sep 13 00:55:15.235596 kubelet[2667]: I0913 00:55:15.234854 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kzm6\" (UniqueName: \"kubernetes.io/projected/befb9c20-74f9-48dc-9181-e5e1cb0477a7-kube-api-access-5kzm6\") pod \"befb9c20-74f9-48dc-9181-e5e1cb0477a7\" (UID: \"befb9c20-74f9-48dc-9181-e5e1cb0477a7\") " Sep 13 00:55:15.246209 systemd[1]: var-lib-kubelet-pods-befb9c20\x2d74f9\x2d48dc\x2d9181\x2de5e1cb0477a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5kzm6.mount: Deactivated successfully. Sep 13 00:55:15.246400 systemd[1]: var-lib-kubelet-pods-befb9c20\x2d74f9\x2d48dc\x2d9181\x2de5e1cb0477a7-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:55:15.251784 kubelet[2667]: I0913 00:55:15.251752 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befb9c20-74f9-48dc-9181-e5e1cb0477a7-kube-api-access-5kzm6" (OuterVolumeSpecName: "kube-api-access-5kzm6") pod "befb9c20-74f9-48dc-9181-e5e1cb0477a7" (UID: "befb9c20-74f9-48dc-9181-e5e1cb0477a7"). InnerVolumeSpecName "kube-api-access-5kzm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:55:15.252521 kubelet[2667]: I0913 00:55:15.252495 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befb9c20-74f9-48dc-9181-e5e1cb0477a7-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "befb9c20-74f9-48dc-9181-e5e1cb0477a7" (UID: "befb9c20-74f9-48dc-9181-e5e1cb0477a7"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:55:15.280000 audit[6249]: NETFILTER_CFG table=filter:153 family=2 entries=10 op=nft_register_rule pid=6249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:15.280000 audit[6249]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe392a45c0 a2=0 a3=7ffe392a45ac items=0 ppid=2776 pid=6249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.280000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:15.287000 audit[6249]: NETFILTER_CFG table=nat:154 family=2 entries=38 op=nft_register_rule pid=6249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:15.287000 audit[6249]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffe392a45c0 a2=0 a3=7ffe392a45ac items=0 ppid=2776 pid=6249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:15.287000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:15.335298 kubelet[2667]: I0913 00:55:15.335237 2667 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/befb9c20-74f9-48dc-9181-e5e1cb0477a7-calico-apiserver-certs\") on node \"ci-3510.3.8-n-1677b4f607\" DevicePath \"\"" Sep 13 00:55:15.335298 kubelet[2667]: I0913 00:55:15.335277 2667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kzm6\" (UniqueName: \"kubernetes.io/projected/befb9c20-74f9-48dc-9181-e5e1cb0477a7-kube-api-access-5kzm6\") on node 
\"ci-3510.3.8-n-1677b4f607\" DevicePath \"\"" Sep 13 00:55:15.354495 kubelet[2667]: I0913 00:55:15.342321 2667 scope.go:117] "RemoveContainer" containerID="ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269" Sep 13 00:55:15.366492 env[1564]: time="2025-09-13T00:55:15.366452875Z" level=info msg="RemoveContainer for \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\"" Sep 13 00:55:15.379868 env[1564]: time="2025-09-13T00:55:15.379822246Z" level=info msg="RemoveContainer for \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\" returns successfully" Sep 13 00:55:15.380214 kubelet[2667]: I0913 00:55:15.380181 2667 scope.go:117] "RemoveContainer" containerID="ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269" Sep 13 00:55:15.380625 env[1564]: time="2025-09-13T00:55:15.380472345Z" level=error msg="ContainerStatus for \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\": not found" Sep 13 00:55:15.380752 kubelet[2667]: E0913 00:55:15.380728 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\": not found" containerID="ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269" Sep 13 00:55:15.380823 kubelet[2667]: I0913 00:55:15.380767 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269"} err="failed to get container status \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba5443db8346b6695ca11b53d8b8e56ac468b7b61a3871d0933430265ecf0269\": not found" Sep 13 
00:55:15.433358 env[1564]: time="2025-09-13T00:55:15.433314330Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:15.440010 env[1564]: time="2025-09-13T00:55:15.439966016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:15.444205 env[1564]: time="2025-09-13T00:55:15.444162607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:15.448691 env[1564]: time="2025-09-13T00:55:15.448431097Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:15.452859 env[1564]: time="2025-09-13T00:55:15.448835396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:55:15.452859 env[1564]: time="2025-09-13T00:55:15.452167689Z" level=info msg="CreateContainer within sandbox \"28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:55:15.484639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368480084.mount: Deactivated successfully. 
Sep 13 00:55:15.500246 env[1564]: time="2025-09-13T00:55:15.500172885Z" level=info msg="CreateContainer within sandbox \"28a332f4f8bf44c4134b9a62458311db8bb97cb20f27a2c1ce3da39b39d95dc2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"abd1baf40df409563e3924b60a8373b8179c3d8c1b78f3c3262ad84de9d0c5b6\"" Sep 13 00:55:15.501457 env[1564]: time="2025-09-13T00:55:15.501428282Z" level=info msg="StartContainer for \"abd1baf40df409563e3924b60a8373b8179c3d8c1b78f3c3262ad84de9d0c5b6\"" Sep 13 00:55:15.535024 systemd[1]: run-containerd-runc-k8s.io-abd1baf40df409563e3924b60a8373b8179c3d8c1b78f3c3262ad84de9d0c5b6-runc.cV9PsZ.mount: Deactivated successfully. Sep 13 00:55:15.574172 env[1564]: time="2025-09-13T00:55:15.574126125Z" level=info msg="StartContainer for \"abd1baf40df409563e3924b60a8373b8179c3d8c1b78f3c3262ad84de9d0c5b6\" returns successfully" Sep 13 00:55:15.782511 kubelet[2667]: I0913 00:55:15.782113 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befb9c20-74f9-48dc-9181-e5e1cb0477a7" path="/var/lib/kubelet/pods/befb9c20-74f9-48dc-9181-e5e1cb0477a7/volumes" Sep 13 00:55:16.331116 kubelet[2667]: I0913 00:55:16.330763 2667 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:55:16.331116 kubelet[2667]: I0913 00:55:16.330798 2667 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:55:18.127888 systemd[1]: run-containerd-runc-k8s.io-e076f83ad1912e097e3a2bef1cbb87e44e7602a49c36bb216e22d751ef66d2fe-runc.O9RTTZ.mount: Deactivated successfully. Sep 13 00:55:26.420099 systemd[1]: run-containerd-runc-k8s.io-cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106-runc.7Uqu0p.mount: Deactivated successfully. 
Sep 13 00:55:31.127124 systemd[1]: Started sshd@7-10.200.4.17:22-10.200.16.10:34692.service. Sep 13 00:55:31.132415 kernel: kauditd_printk_skb: 8 callbacks suppressed Sep 13 00:55:31.132504 kernel: audit: type=1130 audit(1757724931.126:457): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.17:22-10.200.16.10:34692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:31.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.17:22-10.200.16.10:34692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:31.733379 sshd[6333]: Accepted publickey for core from 10.200.16.10 port 34692 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:31.732000 audit[6333]: USER_ACCT pid=6333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:31.740851 sshd[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:31.739000 audit[6333]: CRED_ACQ pid=6333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:31.765955 systemd[1]: Started session-10.scope. Sep 13 00:55:31.766941 systemd-logind[1540]: New session 10 of user core. 
Sep 13 00:55:31.776324 kernel: audit: type=1101 audit(1757724931.732:458): pid=6333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:31.776432 kernel: audit: type=1103 audit(1757724931.739:459): pid=6333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:31.804207 kernel: audit: type=1006 audit(1757724931.739:460): pid=6333 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 13 00:55:31.739000 audit[6333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc437d2100 a2=3 a3=0 items=0 ppid=1 pid=6333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:31.827219 kernel: audit: type=1300 audit(1757724931.739:460): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc437d2100 a2=3 a3=0 items=0 ppid=1 pid=6333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:31.739000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:31.780000 audit[6333]: USER_START pid=6333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:31.858748 kernel: audit: type=1327 audit(1757724931.739:460): 
proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:31.858874 kernel: audit: type=1105 audit(1757724931.780:461): pid=6333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:31.782000 audit[6336]: CRED_ACQ pid=6336 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:31.879296 kernel: audit: type=1103 audit(1757724931.782:462): pid=6336 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:32.291984 sshd[6333]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:32.291000 audit[6333]: USER_END pid=6333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:32.319213 kernel: audit: type=1106 audit(1757724932.291:463): pid=6333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:32.320655 systemd[1]: sshd@7-10.200.4.17:22-10.200.16.10:34692.service: Deactivated successfully. Sep 13 00:55:32.321950 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 13 00:55:32.322516 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:55:32.323446 systemd-logind[1540]: Removed session 10. Sep 13 00:55:32.317000 audit[6333]: CRED_DISP pid=6333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:32.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.17:22-10.200.16.10:34692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:32.361280 kernel: audit: type=1104 audit(1757724932.317:464): pid=6333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:37.413688 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:37.413838 kernel: audit: type=1130 audit(1757724937.391:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.17:22-10.200.16.10:34708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:37.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.17:22-10.200.16.10:34708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:37.391156 systemd[1]: Started sshd@8-10.200.4.17:22-10.200.16.10:34708.service. 
Sep 13 00:55:37.997000 audit[6348]: USER_ACCT pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.018105 sshd[6348]: Accepted publickey for core from 10.200.16.10 port 34708 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:38.018464 kernel: audit: type=1101 audit(1757724937.997:467): pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.018659 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:38.017000 audit[6348]: CRED_ACQ pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.025360 systemd-logind[1540]: New session 11 of user core. Sep 13 00:55:38.026342 systemd[1]: Started session-11.scope. 
Sep 13 00:55:38.040547 kernel: audit: type=1103 audit(1757724938.017:468): pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.040650 kernel: audit: type=1006 audit(1757724938.017:469): pid=6348 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Sep 13 00:55:38.017000 audit[6348]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8070e870 a2=3 a3=0 items=0 ppid=1 pid=6348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:38.052208 kernel: audit: type=1300 audit(1757724938.017:469): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8070e870 a2=3 a3=0 items=0 ppid=1 pid=6348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:38.017000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:38.077165 kernel: audit: type=1327 audit(1757724938.017:469): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:38.077292 kernel: audit: type=1105 audit(1757724938.033:470): pid=6348 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.033000 audit[6348]: USER_START pid=6348 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.040000 audit[6351]: CRED_ACQ pid=6351 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.114970 kernel: audit: type=1103 audit(1757724938.040:471): pid=6351 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.497295 sshd[6348]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:38.498000 audit[6348]: USER_END pid=6348 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.500162 systemd[1]: sshd@8-10.200.4.17:22-10.200.16.10:34708.service: Deactivated successfully. Sep 13 00:55:38.501080 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:55:38.507550 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:55:38.508632 systemd-logind[1540]: Removed session 11. 
Sep 13 00:55:38.498000 audit[6348]: CRED_DISP pid=6348 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.521203 kernel: audit: type=1106 audit(1757724938.498:472): pid=6348 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.521255 kernel: audit: type=1104 audit(1757724938.498:473): pid=6348 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:38.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.17:22-10.200.16.10:34708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:43.595161 systemd[1]: Started sshd@9-10.200.4.17:22-10.200.16.10:44438.service. Sep 13 00:55:43.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.17:22-10.200.16.10:44438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:43.601285 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:43.601370 kernel: audit: type=1130 audit(1757724943.595:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.17:22-10.200.16.10:44438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:43.719395 kubelet[2667]: I0913 00:55:43.719366 2667 scope.go:117] "RemoveContainer" containerID="c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943" Sep 13 00:55:43.720818 env[1564]: time="2025-09-13T00:55:43.720775311Z" level=info msg="RemoveContainer for \"c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943\"" Sep 13 00:55:43.728970 env[1564]: time="2025-09-13T00:55:43.728932698Z" level=info msg="RemoveContainer for \"c1d0ab4ae72fcd6f83a08a07dc855a5e02019036fd43dc4d3b4e472b0fc46943\" returns successfully" Sep 13 00:55:43.730205 env[1564]: time="2025-09-13T00:55:43.730161466Z" level=info msg="StopPodSandbox for \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\"" Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.764 [WARNING][6373] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"ce3dd00f-1685-4fb8-a21f-eacbff2544a7", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54", Pod:"goldmane-7988f88666-rtn9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic11030c6e35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.764 [INFO][6373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.764 [INFO][6373] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" iface="eth0" netns="" Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.764 [INFO][6373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.764 [INFO][6373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.787 [INFO][6380] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" HandleID="k8s-pod-network.cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.787 [INFO][6380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.787 [INFO][6380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.802 [WARNING][6380] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" HandleID="k8s-pod-network.cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.802 [INFO][6380] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" HandleID="k8s-pod-network.cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.803 [INFO][6380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:43.805664 env[1564]: 2025-09-13 00:55:43.804 [INFO][6373] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:55:43.806364 env[1564]: time="2025-09-13T00:55:43.805712997Z" level=info msg="TearDown network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\" successfully" Sep 13 00:55:43.806364 env[1564]: time="2025-09-13T00:55:43.805751196Z" level=info msg="StopPodSandbox for \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\" returns successfully" Sep 13 00:55:43.806613 env[1564]: time="2025-09-13T00:55:43.806574575Z" level=info msg="RemovePodSandbox for \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\"" Sep 13 00:55:43.806682 env[1564]: time="2025-09-13T00:55:43.806615774Z" level=info msg="Forcibly stopping sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\"" Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.839 [WARNING][6395] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"ce3dd00f-1685-4fb8-a21f-eacbff2544a7", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-1677b4f607", ContainerID:"9fa03f00c6a3c1bb611c1ba11c9733309868c5f239b6ee04b123b4d849622c54", Pod:"goldmane-7988f88666-rtn9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic11030c6e35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.839 [INFO][6395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.839 [INFO][6395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" iface="eth0" netns="" Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.839 [INFO][6395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.839 [INFO][6395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.859 [INFO][6402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" HandleID="k8s-pod-network.cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.860 [INFO][6402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.860 [INFO][6402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.865 [WARNING][6402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" HandleID="k8s-pod-network.cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.865 [INFO][6402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" HandleID="k8s-pod-network.cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Workload="ci--3510.3.8--n--1677b4f607-k8s-goldmane--7988f88666--rtn9j-eth0" Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.866 [INFO][6402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:43.869725 env[1564]: 2025-09-13 00:55:43.867 [INFO][6395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719" Sep 13 00:55:43.869725 env[1564]: time="2025-09-13T00:55:43.868822953Z" level=info msg="TearDown network for sandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\" successfully" Sep 13 00:55:43.879864 env[1564]: time="2025-09-13T00:55:43.879817666Z" level=info msg="RemovePodSandbox \"cae4c4579b172f5781792db7fe167d5a1faf8a5e3e30e2633abdfc391a32e719\" returns successfully" Sep 13 00:55:43.880399 env[1564]: time="2025-09-13T00:55:43.880362952Z" level=info msg="StopPodSandbox for \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\"" Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.910 [WARNING][6416] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.911 [INFO][6416] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.911 [INFO][6416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" iface="eth0" netns="" Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.911 [INFO][6416] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.911 [INFO][6416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.932 [INFO][6423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.932 [INFO][6423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.932 [INFO][6423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.937 [WARNING][6423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.937 [INFO][6423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.938 [INFO][6423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:43.940953 env[1564]: 2025-09-13 00:55:43.939 [INFO][6416] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:43.941485 env[1564]: time="2025-09-13T00:55:43.941007772Z" level=info msg="TearDown network for sandbox \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\" successfully" Sep 13 00:55:43.941485 env[1564]: time="2025-09-13T00:55:43.941053771Z" level=info msg="StopPodSandbox for \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\" returns successfully" Sep 13 00:55:43.941794 env[1564]: time="2025-09-13T00:55:43.941762452Z" level=info msg="RemovePodSandbox for \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\"" Sep 13 00:55:43.941884 env[1564]: time="2025-09-13T00:55:43.941802251Z" level=info msg="Forcibly stopping sandbox \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\"" Sep 13 00:55:44.007257 systemd[1]: run-containerd-runc-k8s.io-5ae9b005ed0a0c331e39266c551ee52c53930b99d68d13b128d8850e9f501108-runc.i6RlPT.mount: Deactivated successfully. 
Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:43.984 [WARNING][6437] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:43.984 [INFO][6437] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:43.984 [INFO][6437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" iface="eth0" netns="" Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:43.984 [INFO][6437] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:43.984 [INFO][6437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:44.025 [INFO][6452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:44.025 [INFO][6452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:44.025 [INFO][6452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:44.033 [WARNING][6452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:44.033 [INFO][6452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" HandleID="k8s-pod-network.489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--sqqp6-eth0" Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:44.035 [INFO][6452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:44.044927 env[1564]: 2025-09-13 00:55:44.037 [INFO][6437] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c" Sep 13 00:55:44.044927 env[1564]: time="2025-09-13T00:55:44.043513315Z" level=info msg="TearDown network for sandbox \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\" successfully" Sep 13 00:55:44.064469 systemd[1]: run-containerd-runc-k8s.io-cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106-runc.SLrsHm.mount: Deactivated successfully. 
Sep 13 00:55:44.080138 env[1564]: time="2025-09-13T00:55:44.080065175Z" level=info msg="RemovePodSandbox \"489cab53a025f88d66d60ceb51ed0d5ae3b5efb57962dc06e16900c45d61b79c\" returns successfully" Sep 13 00:55:44.080719 env[1564]: time="2025-09-13T00:55:44.080689159Z" level=info msg="StopPodSandbox for \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\"" Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.134 [WARNING][6491] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.134 [INFO][6491] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.134 [INFO][6491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" iface="eth0" netns="" Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.134 [INFO][6491] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.134 [INFO][6491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.162 [INFO][6503] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.163 [INFO][6503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.163 [INFO][6503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.170 [WARNING][6503] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.170 [INFO][6503] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.171 [INFO][6503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:44.175803 env[1564]: 2025-09-13 00:55:44.174 [INFO][6491] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:44.175803 env[1564]: time="2025-09-13T00:55:44.175773213Z" level=info msg="TearDown network for sandbox \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\" successfully" Sep 13 00:55:44.176356 env[1564]: time="2025-09-13T00:55:44.175805912Z" level=info msg="StopPodSandbox for \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\" returns successfully" Sep 13 00:55:44.176527 env[1564]: time="2025-09-13T00:55:44.176499594Z" level=info msg="RemovePodSandbox for \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\"" Sep 13 00:55:44.176585 env[1564]: time="2025-09-13T00:55:44.176541893Z" level=info msg="Forcibly stopping sandbox \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\"" Sep 13 00:55:44.199432 kubelet[2667]: I0913 00:55:44.198666 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kg2m8" podStartSLOduration=63.034161583 podStartE2EDuration="1m44.198633725s" 
podCreationTimestamp="2025-09-13 00:54:00 +0000 UTC" firstStartedPulling="2025-09-13 00:54:34.285969551 +0000 UTC m=+54.662022939" lastFinishedPulling="2025-09-13 00:55:15.450441693 +0000 UTC m=+95.826495081" observedRunningTime="2025-09-13 00:55:16.371862502 +0000 UTC m=+96.747915790" watchObservedRunningTime="2025-09-13 00:55:44.198633725 +0000 UTC m=+124.574687413" Sep 13 00:55:44.216000 audit[6363]: USER_ACCT pid=6363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:44.240383 sshd[6363]: Accepted publickey for core from 10.200.16.10 port 44438 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:44.235002 sshd[6363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:44.241417 kernel: audit: type=1101 audit(1757724944.216:476): pid=6363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:44.228000 audit[6526]: NETFILTER_CFG table=filter:155 family=2 entries=9 op=nft_register_rule pid=6526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:44.246912 systemd[1]: Started session-12.scope. Sep 13 00:55:44.259304 kernel: audit: type=1325 audit(1757724944.228:477): table=filter:155 family=2 entries=9 op=nft_register_rule pid=6526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:44.247885 systemd-logind[1540]: New session 12 of user core. 
Sep 13 00:55:44.228000 audit[6526]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fffc08854e0 a2=0 a3=7fffc08854cc items=0 ppid=2776 pid=6526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:44.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:44.298719 kernel: audit: type=1300 audit(1757724944.228:477): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fffc08854e0 a2=0 a3=7fffc08854cc items=0 ppid=2776 pid=6526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:44.298801 kernel: audit: type=1327 audit(1757724944.228:477): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:44.233000 audit[6363]: CRED_ACQ pid=6363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:44.321219 kernel: audit: type=1103 audit(1757724944.233:478): pid=6363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:44.336813 kernel: audit: type=1006 audit(1757724944.233:479): pid=6363 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Sep 13 00:55:44.233000 audit[6363]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 
a1=7ffc4f08b0f0 a2=3 a3=0 items=0 ppid=1 pid=6363 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.277 [WARNING][6519] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.277 [INFO][6519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.277 [INFO][6519] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" iface="eth0" netns="" Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.277 [INFO][6519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.277 [INFO][6519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.330 [INFO][6530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.331 [INFO][6530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.331 [INFO][6530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.339 [WARNING][6530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.339 [INFO][6530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" HandleID="k8s-pod-network.bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.340 [INFO][6530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:44.345539 env[1564]: 2025-09-13 00:55:44.344 [INFO][6519] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82" Sep 13 00:55:44.346078 env[1564]: time="2025-09-13T00:55:44.346037133Z" level=info msg="TearDown network for sandbox \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\" successfully" Sep 13 00:55:44.364199 kernel: audit: type=1300 audit(1757724944.233:479): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4f08b0f0 a2=3 a3=0 items=0 ppid=1 pid=6363 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:44.364327 kernel: audit: type=1327 audit(1757724944.233:479): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:44.233000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:44.364401 env[1564]: time="2025-09-13T00:55:44.364125467Z" level=info msg="RemovePodSandbox \"bdc6b77ed0ff961e49ec4f4ecf8fcb9080181c5d09f4237d1d69deac24ce7f82\" returns successfully" Sep 13 00:55:44.364975 env[1564]: time="2025-09-13T00:55:44.364950846Z" level=info msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\"" Sep 13 00:55:44.260000 audit[6526]: NETFILTER_CFG table=nat:156 family=2 entries=31 op=nft_register_chain pid=6526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:44.378253 kernel: audit: type=1325 audit(1757724944.260:480): table=nat:156 family=2 entries=31 op=nft_register_chain pid=6526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:44.260000 audit[6526]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fffc08854e0 a2=0 a3=7fffc08854cc items=0 ppid=2776 pid=6526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:44.260000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:44.261000 audit[6363]: USER_START pid=6363 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:44.262000 audit[6528]: CRED_ACQ pid=6528 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.414 [WARNING][6545] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.414 [INFO][6545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.414 [INFO][6545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" iface="eth0" netns="" Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.414 [INFO][6545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.414 [INFO][6545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.434 [INFO][6552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.435 [INFO][6552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.435 [INFO][6552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.440 [WARNING][6552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.440 [INFO][6552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.441 [INFO][6552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:44.445169 env[1564]: 2025-09-13 00:55:44.442 [INFO][6545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:44.445169 env[1564]: time="2025-09-13T00:55:44.444108510Z" level=info msg="TearDown network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" successfully" Sep 13 00:55:44.445169 env[1564]: time="2025-09-13T00:55:44.444146909Z" level=info msg="StopPodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" returns successfully" Sep 13 00:55:44.445796 env[1564]: time="2025-09-13T00:55:44.445761067Z" level=info msg="RemovePodSandbox for \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\"" Sep 13 00:55:44.445862 env[1564]: time="2025-09-13T00:55:44.445800966Z" level=info msg="Forcibly stopping sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\"" Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.477 [WARNING][6566] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" 
WorkloadEndpoint="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.477 [INFO][6566] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.477 [INFO][6566] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" iface="eth0" netns="" Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.477 [INFO][6566] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.477 [INFO][6566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.497 [INFO][6573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.498 [INFO][6573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.498 [INFO][6573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.503 [WARNING][6573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.503 [INFO][6573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" HandleID="k8s-pod-network.bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Workload="ci--3510.3.8--n--1677b4f607-k8s-calico--apiserver--748d86bbf--zd92h-eth0" Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.504 [INFO][6573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:44.506722 env[1564]: 2025-09-13 00:55:44.505 [INFO][6566] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57" Sep 13 00:55:44.507290 env[1564]: time="2025-09-13T00:55:44.506768798Z" level=info msg="TearDown network for sandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" successfully" Sep 13 00:55:44.514379 env[1564]: time="2025-09-13T00:55:44.514332803Z" level=info msg="RemovePodSandbox \"bd590fcee49d59bcf105fcfb8e5a3311e721c818cd8524d3df2d6540dfe2ce57\" returns successfully" Sep 13 00:55:44.730870 sshd[6363]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:44.731000 audit[6363]: USER_END pid=6363 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:44.731000 audit[6363]: CRED_DISP pid=6363 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:44.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.17:22-10.200.16.10:44438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:44.734174 systemd[1]: sshd@9-10.200.4.17:22-10.200.16.10:44438.service: Deactivated successfully. Sep 13 00:55:44.735396 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:55:44.735981 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:55:44.737303 systemd-logind[1540]: Removed session 12. Sep 13 00:55:48.111028 systemd[1]: run-containerd-runc-k8s.io-e076f83ad1912e097e3a2bef1cbb87e44e7602a49c36bb216e22d751ef66d2fe-runc.N6xbAM.mount: Deactivated successfully. Sep 13 00:55:49.855657 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:55:49.855807 kernel: audit: type=1130 audit(1757724949.827:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.17:22-10.200.16.10:44450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:49.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.17:22-10.200.16.10:44450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:49.828477 systemd[1]: Started sshd@10-10.200.4.17:22-10.200.16.10:44450.service. 
Sep 13 00:55:50.424000 audit[6609]: USER_ACCT pid=6609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.425568 sshd[6609]: Accepted publickey for core from 10.200.16.10 port 44450 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:50.444000 audit[6609]: CRED_ACQ pid=6609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.446563 sshd[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:50.451316 systemd-logind[1540]: New session 13 of user core. Sep 13 00:55:50.453035 systemd[1]: Started session-13.scope. Sep 13 00:55:50.467476 kernel: audit: type=1101 audit(1757724950.424:487): pid=6609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.467572 kernel: audit: type=1103 audit(1757724950.444:488): pid=6609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.467601 kernel: audit: type=1006 audit(1757724950.444:489): pid=6609 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 13 00:55:50.444000 audit[6609]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff80123200 a2=3 a3=0 items=0 ppid=1 pid=6609 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:50.496239 kernel: audit: type=1300 audit(1757724950.444:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff80123200 a2=3 a3=0 items=0 ppid=1 pid=6609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:50.500219 kernel: audit: type=1327 audit(1757724950.444:489): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:50.444000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:50.458000 audit[6609]: USER_START pid=6609 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.503239 kernel: audit: type=1105 audit(1757724950.458:490): pid=6609 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.466000 audit[6612]: CRED_ACQ pid=6612 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.523233 kernel: audit: type=1103 audit(1757724950.466:491): pid=6612 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.918111 sshd[6609]: 
pam_unix(sshd:session): session closed for user core Sep 13 00:55:50.918000 audit[6609]: USER_END pid=6609 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.920866 systemd[1]: sshd@10-10.200.4.17:22-10.200.16.10:44450.service: Deactivated successfully. Sep 13 00:55:50.921789 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:55:50.923671 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:55:50.924696 systemd-logind[1540]: Removed session 13. Sep 13 00:55:50.918000 audit[6609]: CRED_DISP pid=6609 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.940212 kernel: audit: type=1106 audit(1757724950.918:492): pid=6609 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.940259 kernel: audit: type=1104 audit(1757724950.918:493): pid=6609 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:50.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.17:22-10.200.16.10:44450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:56.015308 systemd[1]: Started sshd@11-10.200.4.17:22-10.200.16.10:41314.service. Sep 13 00:55:56.041055 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:56.041226 kernel: audit: type=1130 audit(1757724956.014:495): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.17:22-10.200.16.10:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.17:22-10.200.16.10:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.605000 audit[6651]: USER_ACCT pid=6651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:56.606566 sshd[6651]: Accepted publickey for core from 10.200.16.10 port 41314 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:56.625000 audit[6651]: CRED_ACQ pid=6651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:56.627088 sshd[6651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:56.632903 systemd[1]: Started session-14.scope. Sep 13 00:55:56.633993 systemd-logind[1540]: New session 14 of user core. 
Sep 13 00:55:56.647343 kernel: audit: type=1101 audit(1757724956.605:496): pid=6651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:56.647490 kernel: audit: type=1103 audit(1757724956.625:497): pid=6651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:56.659235 kernel: audit: type=1006 audit(1757724956.625:498): pid=6651 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 13 00:55:56.659358 kernel: audit: type=1300 audit(1757724956.625:498): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3ebafd30 a2=3 a3=0 items=0 ppid=1 pid=6651 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:56.625000 audit[6651]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3ebafd30 a2=3 a3=0 items=0 ppid=1 pid=6651 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:56.625000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:56.683926 kernel: audit: type=1327 audit(1757724956.625:498): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:56.684047 kernel: audit: type=1105 audit(1757724956.638:499): pid=6651 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:56.638000 audit[6651]: USER_START pid=6651 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:56.640000 audit[6653]: CRED_ACQ pid=6653 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:56.721385 kernel: audit: type=1103 audit(1757724956.640:500): pid=6653 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:57.095299 sshd[6651]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:57.095000 audit[6651]: USER_END pid=6651 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:57.098442 systemd[1]: sshd@11-10.200.4.17:22-10.200.16.10:41314.service: Deactivated successfully. Sep 13 00:55:57.099454 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:55:57.105799 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:55:57.106822 systemd-logind[1540]: Removed session 14. 
Sep 13 00:55:57.119040 kernel: audit: type=1106 audit(1757724957.095:501): pid=6651 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:57.119204 kernel: audit: type=1104 audit(1757724957.095:502): pid=6651 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:57.095000 audit[6651]: CRED_DISP pid=6651 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:55:57.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.17:22-10.200.16.10:41314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:02.193258 systemd[1]: Started sshd@12-10.200.4.17:22-10.200.16.10:52262.service. Sep 13 00:56:02.217806 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:56:02.217931 kernel: audit: type=1130 audit(1757724962.192:504): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.17:22-10.200.16.10:52262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:02.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.17:22-10.200.16.10:52262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:02.790000 audit[6665]: USER_ACCT pid=6665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:02.812018 sshd[6665]: Accepted publickey for core from 10.200.16.10 port 52262 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:02.812393 kernel: audit: type=1101 audit(1757724962.790:505): pid=6665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:02.812537 sshd[6665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:02.810000 audit[6665]: CRED_ACQ pid=6665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:02.825761 systemd-logind[1540]: New session 15 of user core. Sep 13 00:56:02.826790 systemd[1]: Started session-15.scope. 
Sep 13 00:56:02.835545 kernel: audit: type=1103 audit(1757724962.810:506): pid=6665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:02.810000 audit[6665]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda3102cb0 a2=3 a3=0 items=0 ppid=1 pid=6665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:02.848204 kernel: audit: type=1006 audit(1757724962.810:507): pid=6665 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 13 00:56:02.848243 kernel: audit: type=1300 audit(1757724962.810:507): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda3102cb0 a2=3 a3=0 items=0 ppid=1 pid=6665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:02.810000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:02.868214 kernel: audit: type=1327 audit(1757724962.810:507): proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:02.828000 audit[6665]: USER_START pid=6665 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:02.894342 kernel: audit: type=1105 audit(1757724962.828:508): pid=6665 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:02.828000 audit[6668]: CRED_ACQ pid=6668 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:02.895247 kernel: audit: type=1103 audit(1757724962.828:509): pid=6668 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:03.287406 sshd[6665]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:03.287000 audit[6665]: USER_END pid=6665 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:03.291119 systemd[1]: sshd@12-10.200.4.17:22-10.200.16.10:52262.service: Deactivated successfully. Sep 13 00:56:03.292076 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:56:03.298736 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:56:03.299632 systemd-logind[1540]: Removed session 15. 
Sep 13 00:56:03.288000 audit[6665]: CRED_DISP pid=6665 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:03.329421 kernel: audit: type=1106 audit(1757724963.287:510): pid=6665 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:03.329555 kernel: audit: type=1104 audit(1757724963.288:511): pid=6665 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:03.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.17:22-10.200.16.10:52262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:03.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.17:22-10.200.16.10:52266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:03.388920 systemd[1]: Started sshd@13-10.200.4.17:22-10.200.16.10:52266.service. 
Sep 13 00:56:03.983000 audit[6679]: USER_ACCT pid=6679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:03.984587 sshd[6679]: Accepted publickey for core from 10.200.16.10 port 52266 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:03.984000 audit[6679]: CRED_ACQ pid=6679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:03.984000 audit[6679]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbe87def0 a2=3 a3=0 items=0 ppid=1 pid=6679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:03.984000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:03.985868 sshd[6679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:03.996021 systemd[1]: Started session-16.scope. Sep 13 00:56:03.996589 systemd-logind[1540]: New session 16 of user core. 
Sep 13 00:56:04.006000 audit[6679]: USER_START pid=6679 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:04.009000 audit[6682]: CRED_ACQ pid=6682 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:04.623459 sshd[6679]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:04.623000 audit[6679]: USER_END pid=6679 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:04.623000 audit[6679]: CRED_DISP pid=6679 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:04.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.17:22-10.200.16.10:52266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:04.626256 systemd[1]: sshd@13-10.200.4.17:22-10.200.16.10:52266.service: Deactivated successfully. Sep 13 00:56:04.627290 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:56:04.629034 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:56:04.630396 systemd-logind[1540]: Removed session 16. 
Sep 13 00:56:04.721268 systemd[1]: Started sshd@14-10.200.4.17:22-10.200.16.10:52280.service. Sep 13 00:56:04.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.17:22-10.200.16.10:52280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:05.333000 audit[6690]: USER_ACCT pid=6690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:05.334692 sshd[6690]: Accepted publickey for core from 10.200.16.10 port 52280 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:05.334000 audit[6690]: CRED_ACQ pid=6690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:05.334000 audit[6690]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde0022270 a2=3 a3=0 items=0 ppid=1 pid=6690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:05.334000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:05.335988 sshd[6690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:05.340316 systemd-logind[1540]: New session 17 of user core. Sep 13 00:56:05.341329 systemd[1]: Started session-17.scope. 
Sep 13 00:56:05.345000 audit[6690]: USER_START pid=6690 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:05.347000 audit[6693]: CRED_ACQ pid=6693 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:05.939347 sshd[6690]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:05.939000 audit[6690]: USER_END pid=6690 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:05.939000 audit[6690]: CRED_DISP pid=6690 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:05.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.17:22-10.200.16.10:52280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:05.942406 systemd[1]: sshd@14-10.200.4.17:22-10.200.16.10:52280.service: Deactivated successfully. Sep 13 00:56:05.943532 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:56:05.945689 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:56:05.947206 systemd-logind[1540]: Removed session 17. 
Sep 13 00:56:11.042515 systemd[1]: Started sshd@15-10.200.4.17:22-10.200.16.10:34140.service. Sep 13 00:56:11.068034 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 00:56:11.068145 kernel: audit: type=1130 audit(1757724971.042:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.17:22-10.200.16.10:34140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:11.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.17:22-10.200.16.10:34140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:11.654276 kernel: audit: type=1101 audit(1757724971.633:532): pid=6723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:11.633000 audit[6723]: USER_ACCT pid=6723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:11.635973 sshd[6723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:11.654865 sshd[6723]: Accepted publickey for core from 10.200.16.10 port 34140 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:11.634000 audit[6723]: CRED_ACQ pid=6723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:11.667489 systemd-logind[1540]: New session 18 of user core. 
Sep 13 00:56:11.668337 systemd[1]: Started session-18.scope. Sep 13 00:56:11.676206 kernel: audit: type=1103 audit(1757724971.634:533): pid=6723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:11.634000 audit[6723]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccaf49de0 a2=3 a3=0 items=0 ppid=1 pid=6723 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:11.693217 kernel: audit: type=1006 audit(1757724971.634:534): pid=6723 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 13 00:56:11.693277 kernel: audit: type=1300 audit(1757724971.634:534): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccaf49de0 a2=3 a3=0 items=0 ppid=1 pid=6723 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:11.634000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:11.727083 kernel: audit: type=1327 audit(1757724971.634:534): proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:11.670000 audit[6723]: USER_START pid=6723 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:11.676000 audit[6726]: CRED_ACQ pid=6726 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 
terminal=ssh res=success' Sep 13 00:56:11.770919 kernel: audit: type=1105 audit(1757724971.670:535): pid=6723 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:11.771053 kernel: audit: type=1103 audit(1757724971.676:536): pid=6726 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:12.106440 sshd[6723]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:12.106000 audit[6723]: USER_END pid=6723 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:12.109716 systemd[1]: sshd@15-10.200.4.17:22-10.200.16.10:34140.service: Deactivated successfully. Sep 13 00:56:12.110641 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:56:12.117071 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:56:12.118097 systemd-logind[1540]: Removed session 18. 
Sep 13 00:56:12.107000 audit[6723]: CRED_DISP pid=6723 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:12.146049 kernel: audit: type=1106 audit(1757724972.106:537): pid=6723 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:12.146170 kernel: audit: type=1104 audit(1757724972.107:538): pid=6723 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:12.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.17:22-10.200.16.10:34140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:17.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.17:22-10.200.16.10:34152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:17.204427 systemd[1]: Started sshd@16-10.200.4.17:22-10.200.16.10:34152.service. Sep 13 00:56:17.209730 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:56:17.209807 kernel: audit: type=1130 audit(1757724977.203:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.17:22-10.200.16.10:34152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:17.795000 audit[6782]: USER_ACCT pid=6782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:17.816227 kernel: audit: type=1101 audit(1757724977.795:541): pid=6782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:17.816280 sshd[6782]: Accepted publickey for core from 10.200.16.10 port 34152 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:17.816692 sshd[6782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:17.815000 audit[6782]: CRED_ACQ pid=6782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:17.822397 systemd[1]: Started session-19.scope. Sep 13 00:56:17.823550 systemd-logind[1540]: New session 19 of user core. 
Sep 13 00:56:17.837203 kernel: audit: type=1103 audit(1757724977.815:542): pid=6782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:17.854218 kernel: audit: type=1006 audit(1757724977.815:543): pid=6782 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Sep 13 00:56:17.854307 kernel: audit: type=1300 audit(1757724977.815:543): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc887a96d0 a2=3 a3=0 items=0 ppid=1 pid=6782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:17.815000 audit[6782]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc887a96d0 a2=3 a3=0 items=0 ppid=1 pid=6782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:17.815000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:17.869207 kernel: audit: type=1327 audit(1757724977.815:543): proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:17.837000 audit[6782]: USER_START pid=6782 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:17.875218 kernel: audit: type=1105 audit(1757724977.837:544): pid=6782 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:17.839000 audit[6785]: CRED_ACQ pid=6785 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:17.896210 kernel: audit: type=1103 audit(1757724977.839:545): pid=6785 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:18.108089 systemd[1]: run-containerd-runc-k8s.io-e076f83ad1912e097e3a2bef1cbb87e44e7602a49c36bb216e22d751ef66d2fe-runc.vgl4Kj.mount: Deactivated successfully. Sep 13 00:56:18.324880 sshd[6782]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:18.324000 audit[6782]: USER_END pid=6782 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:18.333081 systemd[1]: sshd@16-10.200.4.17:22-10.200.16.10:34152.service: Deactivated successfully. Sep 13 00:56:18.334928 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:56:18.335978 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:56:18.337065 systemd-logind[1540]: Removed session 19. 
Sep 13 00:56:18.324000 audit[6782]: CRED_DISP pid=6782 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:18.347207 kernel: audit: type=1106 audit(1757724978.324:546): pid=6782 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:18.347253 kernel: audit: type=1104 audit(1757724978.324:547): pid=6782 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:18.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.17:22-10.200.16.10:34152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.17:22-10.200.16.10:47300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.425222 systemd[1]: Started sshd@17-10.200.4.17:22-10.200.16.10:47300.service. Sep 13 00:56:23.431412 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:56:23.431628 kernel: audit: type=1130 audit(1757724983.424:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.17:22-10.200.16.10:47300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:24.024000 audit[6816]: USER_ACCT pid=6816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.046768 sshd[6816]: Accepted publickey for core from 10.200.16.10 port 47300 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:24.047211 kernel: audit: type=1101 audit(1757724984.024:550): pid=6816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.047226 sshd[6816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:24.046000 audit[6816]: CRED_ACQ pid=6816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.053274 systemd[1]: Started session-20.scope. Sep 13 00:56:24.054400 systemd-logind[1540]: New session 20 of user core. 
Sep 13 00:56:24.070290 kernel: audit: type=1103 audit(1757724984.046:551): pid=6816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.070392 kernel: audit: type=1006 audit(1757724984.046:552): pid=6816 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Sep 13 00:56:24.046000 audit[6816]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1ccc9c60 a2=3 a3=0 items=0 ppid=1 pid=6816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:24.046000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:24.112309 kernel: audit: type=1300 audit(1757724984.046:552): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1ccc9c60 a2=3 a3=0 items=0 ppid=1 pid=6816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:24.112426 kernel: audit: type=1327 audit(1757724984.046:552): proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:24.112464 kernel: audit: type=1105 audit(1757724984.059:553): pid=6816 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.059000 audit[6816]: USER_START pid=6816 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.068000 audit[6818]: CRED_ACQ pid=6818 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.153335 kernel: audit: type=1103 audit(1757724984.068:554): pid=6818 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.531101 sshd[6816]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:24.532000 audit[6816]: USER_END pid=6816 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.534436 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:56:24.536086 systemd[1]: sshd@17-10.200.4.17:22-10.200.16.10:47300.service: Deactivated successfully. Sep 13 00:56:24.537013 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:56:24.538947 systemd-logind[1540]: Removed session 20. 
Sep 13 00:56:24.554578 kernel: audit: type=1106 audit(1757724984.532:555): pid=6816 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.532000 audit[6816]: CRED_DISP pid=6816 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.572218 kernel: audit: type=1104 audit(1757724984.532:556): pid=6816 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:24.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.17:22-10.200.16.10:47300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.632327 systemd[1]: Started sshd@18-10.200.4.17:22-10.200.16.10:47304.service. Sep 13 00:56:29.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.17:22-10.200.16.10:47304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.638243 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:56:29.638329 kernel: audit: type=1130 audit(1757724989.633:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.17:22-10.200.16.10:47304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:30.230000 audit[6849]: USER_ACCT pid=6849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.230962 sshd[6849]: Accepted publickey for core from 10.200.16.10 port 47304 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:30.253869 kernel: audit: type=1101 audit(1757724990.230:559): pid=6849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.253922 sshd[6849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:30.253000 audit[6849]: CRED_ACQ pid=6849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.262811 systemd[1]: Started session-21.scope. Sep 13 00:56:30.264074 systemd-logind[1540]: New session 21 of user core. 
Sep 13 00:56:30.277875 kernel: audit: type=1103 audit(1757724990.253:560): pid=6849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.277976 kernel: audit: type=1006 audit(1757724990.253:561): pid=6849 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Sep 13 00:56:30.253000 audit[6849]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2c7f0160 a2=3 a3=0 items=0 ppid=1 pid=6849 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:30.306475 kernel: audit: type=1300 audit(1757724990.253:561): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2c7f0160 a2=3 a3=0 items=0 ppid=1 pid=6849 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:30.253000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:30.313265 kernel: audit: type=1327 audit(1757724990.253:561): proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:30.276000 audit[6849]: USER_START pid=6849 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.333952 kernel: audit: type=1105 audit(1757724990.276:562): pid=6849 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.334058 kernel: audit: type=1103 audit(1757724990.280:563): pid=6852 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.280000 audit[6852]: CRED_ACQ pid=6852 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.729367 sshd[6849]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:30.730000 audit[6849]: USER_END pid=6849 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.734450 systemd[1]: sshd@18-10.200.4.17:22-10.200.16.10:47304.service: Deactivated successfully. Sep 13 00:56:30.735400 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:56:30.736541 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:56:30.738203 systemd-logind[1540]: Removed session 21. 
Sep 13 00:56:30.730000 audit[6849]: CRED_DISP pid=6849 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.769061 kernel: audit: type=1106 audit(1757724990.730:564): pid=6849 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.769218 kernel: audit: type=1104 audit(1757724990.730:565): pid=6849 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:30.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.17:22-10.200.16.10:47304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:30.829276 systemd[1]: Started sshd@19-10.200.4.17:22-10.200.16.10:37008.service. Sep 13 00:56:30.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.17:22-10.200.16.10:37008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:31.428000 audit[6862]: USER_ACCT pid=6862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:31.428971 sshd[6862]: Accepted publickey for core from 10.200.16.10 port 37008 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:31.429000 audit[6862]: CRED_ACQ pid=6862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:31.429000 audit[6862]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd966ce060 a2=3 a3=0 items=0 ppid=1 pid=6862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:31.429000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:31.430537 sshd[6862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:31.435432 systemd-logind[1540]: New session 22 of user core. Sep 13 00:56:31.436274 systemd[1]: Started session-22.scope. 
Sep 13 00:56:31.441000 audit[6862]: USER_START pid=6862 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:31.442000 audit[6865]: CRED_ACQ pid=6865 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:31.948640 sshd[6862]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:31.950000 audit[6862]: USER_END pid=6862 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:31.950000 audit[6862]: CRED_DISP pid=6862 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:31.951968 systemd[1]: sshd@19-10.200.4.17:22-10.200.16.10:37008.service: Deactivated successfully. Sep 13 00:56:31.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.17:22-10.200.16.10:37008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.956797 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:56:31.956838 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:56:31.958669 systemd-logind[1540]: Removed session 22. 
Sep 13 00:56:32.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.17:22-10.200.16.10:37020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:32.046391 systemd[1]: Started sshd@20-10.200.4.17:22-10.200.16.10:37020.service. Sep 13 00:56:32.649000 audit[6872]: USER_ACCT pid=6872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:32.649793 sshd[6872]: Accepted publickey for core from 10.200.16.10 port 37020 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:32.650000 audit[6872]: CRED_ACQ pid=6872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:32.650000 audit[6872]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd274149a0 a2=3 a3=0 items=0 ppid=1 pid=6872 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:32.650000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:32.651174 sshd[6872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:32.656113 systemd[1]: Started session-23.scope. Sep 13 00:56:32.656580 systemd-logind[1540]: New session 23 of user core. 
Sep 13 00:56:32.662000 audit[6872]: USER_START pid=6872 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:32.664000 audit[6875]: CRED_ACQ pid=6875 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:34.609000 audit[6885]: NETFILTER_CFG table=filter:157 family=2 entries=20 op=nft_register_rule pid=6885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:34.609000 audit[6885]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc525a3cd0 a2=0 a3=7ffc525a3cbc items=0 ppid=2776 pid=6885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:34.609000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:56:34.614000 audit[6885]: NETFILTER_CFG table=nat:158 family=2 entries=26 op=nft_register_rule pid=6885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:34.614000 audit[6885]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffc525a3cd0 a2=0 a3=0 items=0 ppid=2776 pid=6885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:34.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:56:34.638000 
audit[6887]: NETFILTER_CFG table=filter:159 family=2 entries=32 op=nft_register_rule pid=6887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:34.643304 kernel: kauditd_printk_skb: 26 callbacks suppressed Sep 13 00:56:34.643394 kernel: audit: type=1325 audit(1757724994.638:584): table=filter:159 family=2 entries=32 op=nft_register_rule pid=6887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:34.638000 audit[6887]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fffc88537e0 a2=0 a3=7fffc88537cc items=0 ppid=2776 pid=6887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:34.675205 kernel: audit: type=1300 audit(1757724994.638:584): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fffc88537e0 a2=0 a3=7fffc88537cc items=0 ppid=2776 pid=6887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:34.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:56:34.686111 kernel: audit: type=1327 audit(1757724994.638:584): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:56:34.686000 audit[6887]: NETFILTER_CFG table=nat:160 family=2 entries=26 op=nft_register_rule pid=6887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:34.686000 audit[6887]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fffc88537e0 a2=0 a3=0 items=0 ppid=2776 pid=6887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:34.707402 sshd[6872]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:34.718802 kernel: audit: type=1325 audit(1757724994.686:585): table=nat:160 family=2 entries=26 op=nft_register_rule pid=6887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:34.718898 kernel: audit: type=1300 audit(1757724994.686:585): arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fffc88537e0 a2=0 a3=0 items=0 ppid=2776 pid=6887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:34.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:56:34.720609 systemd[1]: sshd@20-10.200.4.17:22-10.200.16.10:37020.service: Deactivated successfully. Sep 13 00:56:34.722396 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:56:34.723005 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:56:34.724196 systemd-logind[1540]: Removed session 23. 
Sep 13 00:56:34.731206 kernel: audit: type=1327 audit(1757724994.686:585): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:56:34.717000 audit[6872]: USER_END pid=6872 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:34.752231 kernel: audit: type=1106 audit(1757724994.717:586): pid=6872 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:34.752335 kernel: audit: type=1104 audit(1757724994.717:587): pid=6872 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:34.717000 audit[6872]: CRED_DISP pid=6872 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:34.768463 kernel: audit: type=1131 audit(1757724994.720:588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.17:22-10.200.16.10:37020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:34.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.17:22-10.200.16.10:37020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:56:34.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.17:22-10.200.16.10:37024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:34.816972 systemd[1]: Started sshd@21-10.200.4.17:22-10.200.16.10:37024.service. Sep 13 00:56:34.837318 kernel: audit: type=1130 audit(1757724994.817:589): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.17:22-10.200.16.10:37024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:35.416834 sshd[6890]: Accepted publickey for core from 10.200.16.10 port 37024 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:35.418163 sshd[6890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:35.416000 audit[6890]: USER_ACCT pid=6890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:35.417000 audit[6890]: CRED_ACQ pid=6890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:35.417000 audit[6890]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1dd8e8e0 a2=3 a3=0 items=0 ppid=1 pid=6890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:35.417000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:35.423820 systemd[1]: Started session-24.scope. 
Sep 13 00:56:35.425163 systemd-logind[1540]: New session 24 of user core. Sep 13 00:56:35.433000 audit[6890]: USER_START pid=6890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:35.435000 audit[6895]: CRED_ACQ pid=6895 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:36.292372 sshd[6890]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:36.292000 audit[6890]: USER_END pid=6890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:36.292000 audit[6890]: CRED_DISP pid=6890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:36.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.17:22-10.200.16.10:37024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:36.295264 systemd[1]: sshd@21-10.200.4.17:22-10.200.16.10:37024.service: Deactivated successfully. Sep 13 00:56:36.296317 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:56:36.298526 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:56:36.299926 systemd-logind[1540]: Removed session 24. 
Sep 13 00:56:36.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.17:22-10.200.16.10:37038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:36.389594 systemd[1]: Started sshd@22-10.200.4.17:22-10.200.16.10:37038.service. Sep 13 00:56:36.988936 sshd[6903]: Accepted publickey for core from 10.200.16.10 port 37038 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:56:36.987000 audit[6903]: USER_ACCT pid=6903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:36.989000 audit[6903]: CRED_ACQ pid=6903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:36.989000 audit[6903]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe630def60 a2=3 a3=0 items=0 ppid=1 pid=6903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:36.989000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:36.990986 sshd[6903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:36.997595 systemd[1]: Started session-25.scope. Sep 13 00:56:36.998117 systemd-logind[1540]: New session 25 of user core. 
Sep 13 00:56:37.009000 audit[6903]: USER_START pid=6903 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:37.011000 audit[6906]: CRED_ACQ pid=6906 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:37.540276 sshd[6903]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:37.540000 audit[6903]: USER_END pid=6903 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:37.540000 audit[6903]: CRED_DISP pid=6903 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:56:37.543720 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:56:37.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.17:22-10.200.16.10:37038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:37.544588 systemd[1]: sshd@22-10.200.4.17:22-10.200.16.10:37038.service: Deactivated successfully. Sep 13 00:56:37.545590 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:56:37.547155 systemd-logind[1540]: Removed session 25. 
Sep 13 00:56:42.051478 kernel: kauditd_printk_skb: 21 callbacks suppressed
Sep 13 00:56:42.051624 kernel: audit: type=1325 audit(1757725002.031:607): table=filter:161 family=2 entries=20 op=nft_register_rule pid=6920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:56:42.031000 audit[6920]: NETFILTER_CFG table=filter:161 family=2 entries=20 op=nft_register_rule pid=6920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:56:42.031000 audit[6920]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff2be32350 a2=0 a3=7fff2be3233c items=0 ppid=2776 pid=6920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:42.073862 kernel: audit: type=1300 audit(1757725002.031:607): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff2be32350 a2=0 a3=7fff2be3233c items=0 ppid=2776 pid=6920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:42.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:56:42.085005 kernel: audit: type=1327 audit(1757725002.031:607): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:56:42.072000 audit[6920]: NETFILTER_CFG table=nat:162 family=2 entries=110 op=nft_register_chain pid=6920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:56:42.072000 audit[6920]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fff2be32350 a2=0 a3=7fff2be3233c items=0 ppid=2776 pid=6920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:42.119010 kernel: audit: type=1325 audit(1757725002.072:608): table=nat:162 family=2 entries=110 op=nft_register_chain pid=6920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:56:42.119110 kernel: audit: type=1300 audit(1757725002.072:608): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fff2be32350 a2=0 a3=7fff2be3233c items=0 ppid=2776 pid=6920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:42.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:56:42.130205 kernel: audit: type=1327 audit(1757725002.072:608): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:56:42.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.17:22-10.200.16.10:55014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:42.638015 systemd[1]: Started sshd@23-10.200.4.17:22-10.200.16.10:55014.service.
Sep 13 00:56:42.659167 kernel: audit: type=1130 audit(1757725002.637:609): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.17:22-10.200.16.10:55014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:43.229000 audit[6922]: USER_ACCT pid=6922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:43.231322 sshd[6922]: Accepted publickey for core from 10.200.16.10 port 55014 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:56:43.251241 kernel: audit: type=1101 audit(1757725003.229:610): pid=6922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:43.250000 audit[6922]: CRED_ACQ pid=6922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:43.252775 sshd[6922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:56:43.262702 systemd[1]: Started session-26.scope.
Sep 13 00:56:43.263790 systemd-logind[1540]: New session 26 of user core.
Sep 13 00:56:43.272208 kernel: audit: type=1103 audit(1757725003.250:611): pid=6922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:43.250000 audit[6922]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff742bfcb0 a2=3 a3=0 items=0 ppid=1 pid=6922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:43.250000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:56:43.266000 audit[6922]: USER_START pid=6922 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:43.272000 audit[6924]: CRED_ACQ pid=6924 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:43.288212 kernel: audit: type=1006 audit(1757725003.250:612): pid=6922 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Sep 13 00:56:43.712410 sshd[6922]: pam_unix(sshd:session): session closed for user core
Sep 13 00:56:43.712000 audit[6922]: USER_END pid=6922 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:43.712000 audit[6922]: CRED_DISP pid=6922 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:43.715465 systemd[1]: sshd@23-10.200.4.17:22-10.200.16.10:55014.service: Deactivated successfully.
Sep 13 00:56:43.716505 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:56:43.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.17:22-10.200.16.10:55014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:43.717013 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:56:43.718311 systemd-logind[1540]: Removed session 26.
Sep 13 00:56:44.009803 systemd[1]: run-containerd-runc-k8s.io-cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106-runc.OeIH0A.mount: Deactivated successfully.
Sep 13 00:56:48.104915 systemd[1]: run-containerd-runc-k8s.io-e076f83ad1912e097e3a2bef1cbb87e44e7602a49c36bb216e22d751ef66d2fe-runc.BNMGBC.mount: Deactivated successfully.
Sep 13 00:56:48.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.17:22-10.200.16.10:55028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:48.825642 systemd[1]: Started sshd@24-10.200.4.17:22-10.200.16.10:55028.service.
Sep 13 00:56:48.830897 kernel: kauditd_printk_skb: 7 callbacks suppressed
Sep 13 00:56:48.830983 kernel: audit: type=1130 audit(1757725008.824:618): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.17:22-10.200.16.10:55028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:49.421000 audit[6997]: USER_ACCT pid=6997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.443476 sshd[6997]: Accepted publickey for core from 10.200.16.10 port 55028 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:56:49.443934 sshd[6997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:56:49.444209 kernel: audit: type=1101 audit(1757725009.421:619): pid=6997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.442000 audit[6997]: CRED_ACQ pid=6997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.449695 systemd[1]: Started session-27.scope.
Sep 13 00:56:49.450871 systemd-logind[1540]: New session 27 of user core.
Sep 13 00:56:49.477302 kernel: audit: type=1103 audit(1757725009.442:620): pid=6997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.477414 kernel: audit: type=1006 audit(1757725009.442:621): pid=6997 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Sep 13 00:56:49.477452 kernel: audit: type=1300 audit(1757725009.442:621): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecc79f540 a2=3 a3=0 items=0 ppid=1 pid=6997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:49.442000 audit[6997]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecc79f540 a2=3 a3=0 items=0 ppid=1 pid=6997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:49.442000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:56:49.497208 kernel: audit: type=1327 audit(1757725009.442:621): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:56:49.456000 audit[6997]: USER_START pid=6997 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.503202 kernel: audit: type=1105 audit(1757725009.456:622): pid=6997 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.464000 audit[6999]: CRED_ACQ pid=6999 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.523349 kernel: audit: type=1103 audit(1757725009.464:623): pid=6999 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.920309 sshd[6997]: pam_unix(sshd:session): session closed for user core
Sep 13 00:56:49.920000 audit[6997]: USER_END pid=6997 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.923260 systemd[1]: sshd@24-10.200.4.17:22-10.200.16.10:55028.service: Deactivated successfully.
Sep 13 00:56:49.924170 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:56:49.931812 systemd-logind[1540]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:56:49.932792 systemd-logind[1540]: Removed session 27.
Sep 13 00:56:49.920000 audit[6997]: CRED_DISP pid=6997 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.959709 kernel: audit: type=1106 audit(1757725009.920:624): pid=6997 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.959796 kernel: audit: type=1104 audit(1757725009.920:625): pid=6997 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:49.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.17:22-10.200.16.10:55028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:52.574415 systemd[1]: run-containerd-runc-k8s.io-5ae9b005ed0a0c331e39266c551ee52c53930b99d68d13b128d8850e9f501108-runc.UrSpbj.mount: Deactivated successfully.
Sep 13 00:56:55.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.17:22-10.200.16.10:48174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:55.019175 systemd[1]: Started sshd@25-10.200.4.17:22-10.200.16.10:48174.service.
Sep 13 00:56:55.024790 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:56:55.024902 kernel: audit: type=1130 audit(1757725015.018:627): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.17:22-10.200.16.10:48174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:55.616000 audit[7029]: USER_ACCT pid=7029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:55.638146 sshd[7029]: Accepted publickey for core from 10.200.16.10 port 48174 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:56:55.638519 kernel: audit: type=1101 audit(1757725015.616:628): pid=7029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:55.638470 sshd[7029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:56:55.636000 audit[7029]: CRED_ACQ pid=7029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:55.643947 systemd[1]: Started session-28.scope.
Sep 13 00:56:55.645047 systemd-logind[1540]: New session 28 of user core.
Sep 13 00:56:55.671454 kernel: audit: type=1103 audit(1757725015.636:629): pid=7029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:55.671550 kernel: audit: type=1006 audit(1757725015.636:630): pid=7029 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Sep 13 00:56:55.636000 audit[7029]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde3aefd60 a2=3 a3=0 items=0 ppid=1 pid=7029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:55.674251 kernel: audit: type=1300 audit(1757725015.636:630): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde3aefd60 a2=3 a3=0 items=0 ppid=1 pid=7029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:55.636000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:56:55.693302 kernel: audit: type=1327 audit(1757725015.636:630): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:56:55.649000 audit[7029]: USER_START pid=7029 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:55.699201 kernel: audit: type=1105 audit(1757725015.649:631): pid=7029 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:55.652000 audit[7031]: CRED_ACQ pid=7031 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:55.736213 kernel: audit: type=1103 audit(1757725015.652:632): pid=7031 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:56.100241 sshd[7029]: pam_unix(sshd:session): session closed for user core
Sep 13 00:56:56.100000 audit[7029]: USER_END pid=7029 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:56.103294 systemd[1]: sshd@25-10.200.4.17:22-10.200.16.10:48174.service: Deactivated successfully.
Sep 13 00:56:56.104224 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 00:56:56.110927 systemd-logind[1540]: Session 28 logged out. Waiting for processes to exit.
Sep 13 00:56:56.111919 systemd-logind[1540]: Removed session 28.
Sep 13 00:56:56.100000 audit[7029]: CRED_DISP pid=7029 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:56.140750 kernel: audit: type=1106 audit(1757725016.100:633): pid=7029 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:56.140874 kernel: audit: type=1104 audit(1757725016.100:634): pid=7029 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:56:56.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.17:22-10.200.16.10:48174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:57:01.200876 systemd[1]: Started sshd@26-10.200.4.17:22-10.200.16.10:45874.service.
Sep 13 00:57:01.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.17:22-10.200.16.10:45874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:57:01.207277 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:57:01.207391 kernel: audit: type=1130 audit(1757725021.199:636): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.17:22-10.200.16.10:45874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:57:01.814540 kernel: audit: type=1101 audit(1757725021.792:637): pid=7044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:01.792000 audit[7044]: USER_ACCT pid=7044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:01.814797 sshd[7044]: Accepted publickey for core from 10.200.16.10 port 45874 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:57:01.814870 sshd[7044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:57:01.813000 audit[7044]: CRED_ACQ pid=7044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:01.823861 systemd[1]: Started session-29.scope.
Sep 13 00:57:01.824922 systemd-logind[1540]: New session 29 of user core.
Sep 13 00:57:01.851440 kernel: audit: type=1103 audit(1757725021.813:638): pid=7044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:01.851555 kernel: audit: type=1006 audit(1757725021.813:639): pid=7044 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Sep 13 00:57:01.813000 audit[7044]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4ea4def0 a2=3 a3=0 items=0 ppid=1 pid=7044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:57:01.852218 kernel: audit: type=1300 audit(1757725021.813:639): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4ea4def0 a2=3 a3=0 items=0 ppid=1 pid=7044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:57:01.813000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:57:01.829000 audit[7044]: USER_START pid=7044 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:01.896458 kernel: audit: type=1327 audit(1757725021.813:639): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:57:01.896557 kernel: audit: type=1105 audit(1757725021.829:640): pid=7044 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:01.834000 audit[7046]: CRED_ACQ pid=7046 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:01.897215 kernel: audit: type=1103 audit(1757725021.834:641): pid=7046 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:02.273610 sshd[7044]: pam_unix(sshd:session): session closed for user core
Sep 13 00:57:02.274000 audit[7044]: USER_END pid=7044 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:02.277428 systemd[1]: sshd@26-10.200.4.17:22-10.200.16.10:45874.service: Deactivated successfully.
Sep 13 00:57:02.278332 systemd[1]: session-29.scope: Deactivated successfully.
Sep 13 00:57:02.284579 systemd-logind[1540]: Session 29 logged out. Waiting for processes to exit.
Sep 13 00:57:02.285634 systemd-logind[1540]: Removed session 29.
Sep 13 00:57:02.274000 audit[7044]: CRED_DISP pid=7044 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:02.313796 kernel: audit: type=1106 audit(1757725022.274:642): pid=7044 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:02.313878 kernel: audit: type=1104 audit(1757725022.274:643): pid=7044 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:02.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.17:22-10.200.16.10:45874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:57:07.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.17:22-10.200.16.10:45888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:57:07.371080 systemd[1]: Started sshd@27-10.200.4.17:22-10.200.16.10:45888.service.
Sep 13 00:57:07.376979 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:57:07.377056 kernel: audit: type=1130 audit(1757725027.370:645): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.17:22-10.200.16.10:45888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:57:07.962000 audit[7057]: USER_ACCT pid=7057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:07.964976 sshd[7057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:57:07.984205 kernel: audit: type=1101 audit(1757725027.962:646): pid=7057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:07.984252 sshd[7057]: Accepted publickey for core from 10.200.16.10 port 45888 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:57:07.962000 audit[7057]: CRED_ACQ pid=7057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:08.008228 kernel: audit: type=1103 audit(1757725027.962:647): pid=7057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:08.010181 systemd-logind[1540]: New session 30 of user core.
Sep 13 00:57:08.011959 systemd[1]: Started session-30.scope.
Sep 13 00:57:08.038215 kernel: audit: type=1006 audit(1757725027.962:648): pid=7057 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Sep 13 00:57:07.962000 audit[7057]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea5ca23b0 a2=3 a3=0 items=0 ppid=1 pid=7057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:57:07.962000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:57:08.068377 kernel: audit: type=1300 audit(1757725027.962:648): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea5ca23b0 a2=3 a3=0 items=0 ppid=1 pid=7057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:57:08.068497 kernel: audit: type=1327 audit(1757725027.962:648): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:57:08.015000 audit[7057]: USER_START pid=7057 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:08.090685 kernel: audit: type=1105 audit(1757725028.015:649): pid=7057 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:08.017000 audit[7060]: CRED_ACQ pid=7060 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:08.111209 kernel: audit: type=1103 audit(1757725028.017:650): pid=7060 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:08.497481 sshd[7057]: pam_unix(sshd:session): session closed for user core
Sep 13 00:57:08.497000 audit[7057]: USER_END pid=7057 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:08.523206 kernel: audit: type=1106 audit(1757725028.497:651): pid=7057 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 00:57:08.524900 systemd-logind[1540]: Session 30 logged out. Waiting for processes to exit.
Sep 13 00:57:08.525345 systemd[1]: sshd@27-10.200.4.17:22-10.200.16.10:45888.service: Deactivated successfully.
Sep 13 00:57:08.526333 systemd[1]: session-30.scope: Deactivated successfully.
Sep 13 00:57:08.528162 systemd-logind[1540]: Removed session 30.
Sep 13 00:57:08.521000 audit[7057]: CRED_DISP pid=7057 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:08.549211 kernel: audit: type=1104 audit(1757725028.521:652): pid=7057 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:08.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.17:22-10.200.16.10:45888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:13.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.4.17:22-10.200.16.10:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:13.595666 systemd[1]: Started sshd@28-10.200.4.17:22-10.200.16.10:47032.service. Sep 13 00:57:13.601124 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:57:13.601247 kernel: audit: type=1130 audit(1757725033.594:654): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.4.17:22-10.200.16.10:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:13.983017 systemd[1]: run-containerd-runc-k8s.io-5ae9b005ed0a0c331e39266c551ee52c53930b99d68d13b128d8850e9f501108-runc.zkIGFX.mount: Deactivated successfully. Sep 13 00:57:14.013194 systemd[1]: run-containerd-runc-k8s.io-cc88cc0abb9f79bb2371d5b136aea243329cd12e3c7b0bbd3bac4c856d731106-runc.XaxP7a.mount: Deactivated successfully. 
Sep 13 00:57:14.194000 audit[7075]: USER_ACCT pid=7075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.216179 sshd[7075]: Accepted publickey for core from 10.200.16.10 port 47032 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:57:14.216543 kernel: audit: type=1101 audit(1757725034.194:655): pid=7075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.216675 sshd[7075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:57:14.215000 audit[7075]: CRED_ACQ pid=7075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.231740 systemd[1]: Started session-31.scope. Sep 13 00:57:14.233234 systemd-logind[1540]: New session 31 of user core. 
Sep 13 00:57:14.238222 kernel: audit: type=1103 audit(1757725034.215:656): pid=7075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.215000 audit[7075]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde18dac30 a2=3 a3=0 items=0 ppid=1 pid=7075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:14.252284 kernel: audit: type=1006 audit(1757725034.215:657): pid=7075 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Sep 13 00:57:14.252338 kernel: audit: type=1300 audit(1757725034.215:657): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde18dac30 a2=3 a3=0 items=0 ppid=1 pid=7075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:14.215000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:57:14.271269 kernel: audit: type=1327 audit(1757725034.215:657): proctitle=737368643A20636F7265205B707269765D Sep 13 00:57:14.237000 audit[7075]: USER_START pid=7075 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.297278 kernel: audit: type=1105 audit(1757725034.237:658): pid=7075 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.297406 kernel: audit: type=1103 audit(1757725034.238:659): pid=7120 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.238000 audit[7120]: CRED_ACQ pid=7120 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.679804 sshd[7075]: pam_unix(sshd:session): session closed for user core Sep 13 00:57:14.679000 audit[7075]: USER_END pid=7075 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.683005 systemd-logind[1540]: Session 31 logged out. Waiting for processes to exit. Sep 13 00:57:14.684622 systemd[1]: sshd@28-10.200.4.17:22-10.200.16.10:47032.service: Deactivated successfully. Sep 13 00:57:14.685504 systemd[1]: session-31.scope: Deactivated successfully. Sep 13 00:57:14.687305 systemd-logind[1540]: Removed session 31. 
Sep 13 00:57:14.680000 audit[7075]: CRED_DISP pid=7075 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.719226 kernel: audit: type=1106 audit(1757725034.679:660): pid=7075 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.719374 kernel: audit: type=1104 audit(1757725034.680:661): pid=7075 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 00:57:14.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.4.17:22-10.200.16.10:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'