Dec 16 13:03:12.684702 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025
Dec 16 13:03:12.684731 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 16 13:03:12.684749 kernel: BIOS-provided physical RAM map:
Dec 16 13:03:12.684757 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:03:12.684764 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 16 13:03:12.684771 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Dec 16 13:03:12.684779 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Dec 16 13:03:12.684787 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Dec 16 13:03:12.684794 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Dec 16 13:03:12.684803 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 16 13:03:12.684811 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 16 13:03:12.684818 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 16 13:03:12.684825 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 16 13:03:12.684832 kernel: printk: legacy bootconsole [earlyser0] enabled
Dec 16 13:03:12.684843 kernel: NX (Execute Disable) protection: active
Dec 16 13:03:12.684851 kernel: APIC: Static calls initialized
Dec 16 13:03:12.684858 kernel: efi: EFI v2.7 by Microsoft
Dec 16 13:03:12.684866 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9ac698 RNG=0x3ffd2018
Dec 16 13:03:12.684874 kernel: random: crng init done
Dec 16 13:03:12.684882 kernel: secureboot: Secure boot disabled
Dec 16 13:03:12.684890 kernel: SMBIOS 3.1.0 present.
Dec 16 13:03:12.684897 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Dec 16 13:03:12.684905 kernel: DMI: Memory slots populated: 2/2
Dec 16 13:03:12.684912 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 16 13:03:12.684922 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Dec 16 13:03:12.684930 kernel: Hyper-V: Nested features: 0x3e0101
Dec 16 13:03:12.684939 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 16 13:03:12.684947 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 16 13:03:12.684955 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:03:12.684963 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:03:12.684997 kernel: tsc: Detected 2300.000 MHz processor
Dec 16 13:03:12.685006 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:03:12.685016 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:03:12.685025 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Dec 16 13:03:12.685035 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:03:12.685045 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:03:12.685054 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Dec 16 13:03:12.685062 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Dec 16 13:03:12.685071 kernel: Using GB pages for direct mapping
Dec 16 13:03:12.685079 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:03:12.685093 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 16 13:03:12.685101 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:03:12.685110 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:03:12.685118 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 13:03:12.685126 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 16 13:03:12.685134 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:03:12.685143 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:03:12.685150 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:03:12.685174 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:03:12.685182 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:03:12.685191 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:03:12.685198 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 16 13:03:12.685208 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Dec 16 13:03:12.685216 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 16 13:03:12.685224 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 16 13:03:12.685232 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 16 13:03:12.685241 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 16 13:03:12.685249 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 16 13:03:12.685258 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Dec 16 13:03:12.685269 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 16 13:03:12.685277 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 16 13:03:12.685287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Dec 16 13:03:12.685295 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Dec 16 13:03:12.685304 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Dec 16 13:03:12.685312 kernel: Zone ranges:
Dec 16 13:03:12.685321 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:03:12.685332 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:03:12.685408 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:03:12.685416 kernel: Device empty
Dec 16 13:03:12.685425 kernel: Movable zone start for each node
Dec 16 13:03:12.685433 kernel: Early memory node ranges
Dec 16 13:03:12.685442 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:03:12.685451 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Dec 16 13:03:12.685461 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Dec 16 13:03:12.685470 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 16 13:03:12.685478 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:03:12.685486 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 16 13:03:12.685496 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:03:12.685505 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:03:12.685514 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 16 13:03:12.685524 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Dec 16 13:03:12.685533 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 16 13:03:12.685542 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 16 13:03:12.685550 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:03:12.685559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:03:12.685568 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:03:12.685576 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 16 13:03:12.685587 kernel: TSC deadline timer available
Dec 16 13:03:12.685597 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:03:12.685606 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:03:12.685615 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:03:12.685623 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:03:12.685632 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:03:12.685640 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:03:12.685649 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:03:12.685659 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 16 13:03:12.685668 kernel: Booting paravirtualized kernel on Hyper-V
Dec 16 13:03:12.685678 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:03:12.685687 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:03:12.685696 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:03:12.685705 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:03:12.685714 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:03:12.685724 kernel: Hyper-V: PV spinlocks enabled
Dec 16 13:03:12.685733 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:03:12.685743 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 16 13:03:12.685753 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 13:03:12.685762 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:03:12.685771 kernel: Fallback order for Node 0: 0
Dec 16 13:03:12.685783 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Dec 16 13:03:12.685791 kernel: Policy zone: Normal
Dec 16 13:03:12.685799 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:03:12.685806 kernel: software IO TLB: area num 2.
Dec 16 13:03:12.685813 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:03:12.685821 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:03:12.685828 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:03:12.685838 kernel: Dynamic Preempt: voluntary
Dec 16 13:03:12.685845 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:03:12.685854 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:03:12.685871 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:03:12.685881 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:03:12.685889 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:03:12.685898 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:03:12.685906 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:03:12.685914 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:03:12.685922 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:03:12.685933 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:03:12.685941 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:03:12.685950 kernel: Using NULL legacy PIC
Dec 16 13:03:12.685961 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 16 13:03:12.685969 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:03:12.685977 kernel: Console: colour dummy device 80x25
Dec 16 13:03:12.685986 kernel: printk: legacy console [tty1] enabled
Dec 16 13:03:12.685994 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:03:12.686002 kernel: printk: legacy bootconsole [earlyser0] disabled
Dec 16 13:03:12.686010 kernel: ACPI: Core revision 20240827
Dec 16 13:03:12.686018 kernel: Failed to register legacy timer interrupt
Dec 16 13:03:12.686029 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:03:12.686038 kernel: x2apic enabled
Dec 16 13:03:12.686046 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:03:12.686055 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Dec 16 13:03:12.686063 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 13:03:12.686071 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Dec 16 13:03:12.686079 kernel: Hyper-V: Using IPI hypercalls
Dec 16 13:03:12.686089 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Dec 16 13:03:12.686097 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Dec 16 13:03:12.686106 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Dec 16 13:03:12.686115 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Dec 16 13:03:12.686124 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Dec 16 13:03:12.686133 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Dec 16 13:03:12.686141 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Dec 16 13:03:12.686152 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Dec 16 13:03:12.686172 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:03:12.686181 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 13:03:12.686190 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 13:03:12.686198 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:03:12.686206 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:03:12.686214 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:03:12.686222 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 13:03:12.686232 kernel: RETBleed: Vulnerable
Dec 16 13:03:12.686239 kernel: Speculative Store Bypass: Vulnerable
Dec 16 13:03:12.686247 kernel: active return thunk: its_return_thunk
Dec 16 13:03:12.686255 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:03:12.686263 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:03:12.686271 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:03:12.686279 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:03:12.686287 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:03:12.686295 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:03:12.686303 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:03:12.686312 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Dec 16 13:03:12.686321 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Dec 16 13:03:12.686329 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Dec 16 13:03:12.686337 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:03:12.686345 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 16 13:03:12.686352 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 16 13:03:12.686360 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 16 13:03:12.686367 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Dec 16 13:03:12.686375 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Dec 16 13:03:12.686383 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Dec 16 13:03:12.686392 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Dec 16 13:03:12.686402 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:03:12.686410 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:03:12.686418 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:03:12.686426 kernel: landlock: Up and running.
Dec 16 13:03:12.686433 kernel: SELinux: Initializing.
Dec 16 13:03:12.686441 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:03:12.686449 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:03:12.686457 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Dec 16 13:03:12.686464 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Dec 16 13:03:12.686473 kernel: signal: max sigframe size: 11952
Dec 16 13:03:12.686483 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:03:12.686491 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:03:12.686500 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:03:12.686508 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:03:12.686517 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:03:12.686526 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:03:12.686534 kernel: .... node #0, CPUs: #1
Dec 16 13:03:12.686544 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:03:12.686552 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Dec 16 13:03:12.686561 kernel: Memory: 8095536K/8383228K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 281556K reserved, 0K cma-reserved)
Dec 16 13:03:12.686569 kernel: devtmpfs: initialized
Dec 16 13:03:12.686578 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:03:12.686586 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 16 13:03:12.686595 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:03:12.686605 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:03:12.686613 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:03:12.686622 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:03:12.686630 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:03:12.686638 kernel: audit: type=2000 audit(1765890187.069:1): state=initialized audit_enabled=0 res=1
Dec 16 13:03:12.686646 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:03:12.686654 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:03:12.686664 kernel: cpuidle: using governor menu
Dec 16 13:03:12.686672 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:03:12.686681 kernel: dca service started, version 1.12.1
Dec 16 13:03:12.686690 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Dec 16 13:03:12.686698 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Dec 16 13:03:12.686706 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:03:12.686714 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:03:12.686724 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:03:12.686732 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:03:12.686740 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:03:12.686749 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:03:12.686758 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:03:12.686767 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:03:12.686775 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:03:12.686785 kernel: ACPI: Interpreter enabled
Dec 16 13:03:12.686793 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:03:12.686800 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:03:12.686809 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:03:12.686817 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 16 13:03:12.686825 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 16 13:03:12.686834 kernel: iommu: Default domain type: Translated
Dec 16 13:03:12.686843 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:03:12.686853 kernel: efivars: Registered efivars operations
Dec 16 13:03:12.686861 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:03:12.686869 kernel: PCI: System does not support PCI
Dec 16 13:03:12.686877 kernel: vgaarb: loaded
Dec 16 13:03:12.686885 kernel: clocksource: Switched to clocksource tsc-early
Dec 16 13:03:12.686893 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:03:12.686901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:03:12.686912 kernel: pnp: PnP ACPI init
Dec 16 13:03:12.686921 kernel: pnp: PnP ACPI: found 3 devices
Dec 16 13:03:12.686930 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:03:12.686938 kernel: NET: Registered PF_INET protocol family
Dec 16 13:03:12.686946 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:03:12.686954 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 16 13:03:12.686974 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:03:12.686983 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:03:12.686992 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 16 13:03:12.687001 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 16 13:03:12.687010 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:03:12.687019 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:03:12.687027 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:03:12.687037 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:03:12.687045 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:03:12.687050 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 13:03:12.687055 kernel: software IO TLB: mapped [mem 0x000000003a9ac000-0x000000003e9ac000] (64MB)
Dec 16 13:03:12.687060 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:03:12.687065 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Dec 16 13:03:12.687071 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Dec 16 13:03:12.687076 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:03:12.687083 kernel: Initialise system trusted keyrings
Dec 16 13:03:12.687088 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 16 13:03:12.687094 kernel: Key type asymmetric registered
Dec 16 13:03:12.687099 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:03:12.687104 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:03:12.687110 kernel: io scheduler mq-deadline registered
Dec 16 13:03:12.687115 kernel: io scheduler kyber registered
Dec 16 13:03:12.687122 kernel: io scheduler bfq registered
Dec 16 13:03:12.687127 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:03:12.687133 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:03:12.687138 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:03:12.687143 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 16 13:03:12.687149 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:03:12.687165 kernel: i8042: PNP: No PS/2 controller found.
Dec 16 13:03:12.687316 kernel: rtc_cmos 00:02: registered as rtc0
Dec 16 13:03:12.687383 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T13:03:09 UTC (1765890189)
Dec 16 13:03:12.687445 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 16 13:03:12.687452 kernel: intel_pstate: Intel P-state driver initializing
Dec 16 13:03:12.687457 kernel: efifb: probing for efifb
Dec 16 13:03:12.687463 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 13:03:12.687470 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 13:03:12.687476 kernel: efifb: scrolling: redraw
Dec 16 13:03:12.687481 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:03:12.687486 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:03:12.687492 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:03:12.687497 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:03:12.687502 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:03:12.687509 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:03:12.687514 kernel: Segment Routing with IPv6
Dec 16 13:03:12.687519 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:03:12.687524 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:03:12.687529 kernel: Key type dns_resolver registered
Dec 16 13:03:12.687534 kernel: IPI shorthand broadcast: enabled
Dec 16 13:03:12.687540 kernel: sched_clock: Marking stable (2039003952, 97269002)->(2447009756, -310736802)
Dec 16 13:03:12.687545 kernel: registered taskstats version 1
Dec 16 13:03:12.687552 kernel: Loading compiled-in X.509 certificates
Dec 16 13:03:12.687557 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4'
Dec 16 13:03:12.687562 kernel: Demotion targets for Node 0: null
Dec 16 13:03:12.687567 kernel: Key type .fscrypt registered
Dec 16 13:03:12.687572 kernel: Key type fscrypt-provisioning registered
Dec 16 13:03:12.687578 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:03:12.687584 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:03:12.687590 kernel: ima: No architecture policies found
Dec 16 13:03:12.687596 kernel: clk: Disabling unused clocks
Dec 16 13:03:12.687601 kernel: Freeing unused kernel image (initmem) memory: 15464K
Dec 16 13:03:12.687607 kernel: Write protecting the kernel read-only data: 45056k
Dec 16 13:03:12.687612 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Dec 16 13:03:12.687617 kernel: Run /init as init process
Dec 16 13:03:12.687622 kernel: with arguments:
Dec 16 13:03:12.687629 kernel: /init
Dec 16 13:03:12.687634 kernel: with environment:
Dec 16 13:03:12.687639 kernel: HOME=/
Dec 16 13:03:12.687645 kernel: TERM=linux
Dec 16 13:03:12.687650 kernel: hv_vmbus: Vmbus version:5.3
Dec 16 13:03:12.687655 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 13:03:12.687661 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 16 13:03:12.687667 kernel: PTP clock support registered
Dec 16 13:03:12.687672 kernel: hv_utils: Registering HyperV Utility Driver
Dec 16 13:03:12.687677 kernel: hv_vmbus: registering driver hv_utils
Dec 16 13:03:12.687683 kernel: hv_utils: Shutdown IC version 3.2
Dec 16 13:03:12.687688 kernel: hv_utils: Heartbeat IC version 3.0
Dec 16 13:03:12.687693 kernel: hv_utils: TimeSync IC version 4.0
Dec 16 13:03:12.687698 kernel: SCSI subsystem initialized
Dec 16 13:03:12.687704 kernel: hv_vmbus: registering driver hv_pci
Dec 16 13:03:12.687806 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Dec 16 13:03:12.687875 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Dec 16 13:03:12.687960 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Dec 16 13:03:12.688029 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 13:03:12.688123 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Dec 16 13:03:12.688233 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Dec 16 13:03:12.688336 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 13:03:12.688443 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Dec 16 13:03:12.688452 kernel: hv_vmbus: registering driver hv_storvsc
Dec 16 13:03:12.688563 kernel: scsi host0: storvsc_host_t
Dec 16 13:03:12.688906 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Dec 16 13:03:12.688923 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 13:03:12.688932 kernel: hv_vmbus: registering driver hid_hyperv
Dec 16 13:03:12.688940 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 16 13:03:12.689059 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 16 13:03:12.689071 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 16 13:03:12.689084 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 16 13:03:12.689207 kernel: nvme nvme0: pci function c05b:00:00.0
Dec 16 13:03:12.689327 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Dec 16 13:03:12.689405 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 13:03:12.689417 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:03:12.689525 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 16 13:03:12.689535 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 13:03:12.689638 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 16 13:03:12.689649 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:03:12.689658 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:03:12.689666 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:03:12.689675 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 16 13:03:12.689697 kernel: raid6: avx512x4 gen() 33757 MB/s
Dec 16 13:03:12.689708 kernel: raid6: avx512x2 gen() 32995 MB/s
Dec 16 13:03:12.689717 kernel: raid6: avx512x1 gen() 28565 MB/s
Dec 16 13:03:12.689726 kernel: raid6: avx2x4 gen() 28980 MB/s
Dec 16 13:03:12.689735 kernel: raid6: avx2x2 gen() 30062 MB/s
Dec 16 13:03:12.689743 kernel: raid6: avx2x1 gen() 20444 MB/s
Dec 16 13:03:12.689751 kernel: raid6: using algorithm avx512x4 gen() 33757 MB/s
Dec 16 13:03:12.689761 kernel: raid6: .... xor() 5427 MB/s, rmw enabled
Dec 16 13:03:12.689770 kernel: raid6: using avx512x2 recovery algorithm
Dec 16 13:03:12.689778 kernel: xor: automatically using best checksumming function avx
Dec 16 13:03:12.689787 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:03:12.689796 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (921)
Dec 16 13:03:12.689807 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f
Dec 16 13:03:12.689816 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:03:12.689827 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:03:12.689835 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:03:12.689843 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:03:12.689851 kernel: loop: module loaded
Dec 16 13:03:12.689859 kernel: loop0: detected capacity change from 0 to 100136
Dec 16 13:03:12.689868 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:03:12.689879 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:03:12.689893 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:03:12.689903 systemd[1]: Detected virtualization microsoft.
Dec 16 13:03:12.689911 systemd[1]: Detected architecture x86-64.
Dec 16 13:03:12.689920 systemd[1]: Running in initrd.
Dec 16 13:03:12.689930 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:03:12.689939 systemd[1]: Hostname set to .
Dec 16 13:03:12.689950 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:03:12.689959 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:03:12.689969 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:03:12.689978 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:03:12.689987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:03:12.689997 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:03:12.690008 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:03:12.690017 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:03:12.690027 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:03:12.690038 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:03:12.690048 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:03:12.690059 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:03:12.690069 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:03:12.690079 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:03:12.690088 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:03:12.690097 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:03:12.690110 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:03:12.690121 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:03:12.690131 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 13:03:12.690141 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:03:12.690151 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:03:12.690175 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:03:12.690185 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:03:12.690197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:03:12.690208 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:03:12.690218 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:03:12.690229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:03:12.690238 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:03:12.690247 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:03:12.690257 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:03:12.690269 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:03:12.690279 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:03:12.690289 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:03:12.690299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:12.690327 systemd-journald[1055]: Collecting audit messages is enabled.
Dec 16 13:03:12.690351 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:03:12.690364 kernel: audit: type=1130 audit(1765890192.687:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.690374 systemd-journald[1055]: Journal started
Dec 16 13:03:12.690400 systemd-journald[1055]: Runtime Journal (/run/log/journal/12312461c8604618b4a064032a2f266d) is 8M, max 158.5M, 150.5M free.
Dec 16 13:03:12.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.692574 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:03:12.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.700263 kernel: audit: type=1130 audit(1765890192.696:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.700343 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:03:12.714278 kernel: audit: type=1130 audit(1765890192.703:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.705149 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:03:12.708648 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:03:12.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.719613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:03:12.724877 kernel: audit: type=1130 audit(1765890192.706:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.779559 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:03:12.794917 systemd-modules-load[1060]: Inserted module 'br_netfilter'
Dec 16 13:03:12.795377 kernel: Bridge firewalling registered
Dec 16 13:03:12.796844 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:03:12.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.804653 kernel: audit: type=1130 audit(1765890192.795:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.803328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:03:12.834896 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:03:12.837017 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:03:12.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.844897 kernel: audit: type=1130 audit(1765890192.835:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.844966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:03:12.876886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:03:12.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.881479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:03:12.885284 kernel: audit: type=1130 audit(1765890192.876:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.885304 kernel: audit: type=1130 audit(1765890192.880:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.885380 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:03:12.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.897955 kernel: audit: type=1130 audit(1765890192.884:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.891000 audit: BPF prog-id=6 op=LOAD
Dec 16 13:03:12.893104 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:03:12.903063 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:12.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.908274 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:03:13.016243 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:03:13.021806 kernel: kauditd_printk_skb: 2 callbacks suppressed
Dec 16 13:03:13.021932 kernel: audit: type=1130 audit(1765890193.018:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.046555 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:03:13.054438 systemd-resolved[1083]: Positive Trust Anchors:
Dec 16 13:03:13.054450 systemd-resolved[1083]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:03:13.054454 systemd-resolved[1083]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 13:03:13.054486 systemd-resolved[1083]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:03:13.093800 dracut-cmdline[1098]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 16 13:03:13.103086 systemd-resolved[1083]: Defaulting to hostname 'linux'.
Dec 16 13:03:13.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.112167 kernel: audit: type=1130 audit(1765890193.105:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.103893 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:03:13.107535 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:03:13.217185 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:03:13.283180 kernel: iscsi: registered transport (tcp)
Dec 16 13:03:13.349177 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:03:13.349240 kernel: QLogic iSCSI HBA Driver
Dec 16 13:03:13.407904 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:03:13.425756 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:03:13.434181 kernel: audit: type=1130 audit(1765890193.426:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.433992 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:03:13.466842 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:03:13.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.471855 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:03:13.476282 kernel: audit: type=1130 audit(1765890193.467:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.484259 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:03:13.505515 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:03:13.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.513515 kernel: audit: type=1130 audit(1765890193.506:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.514277 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:03:13.519107 kernel: audit: type=1334 audit(1765890193.506:18): prog-id=7 op=LOAD
Dec 16 13:03:13.519138 kernel: audit: type=1334 audit(1765890193.506:19): prog-id=8 op=LOAD
Dec 16 13:03:13.506000 audit: BPF prog-id=7 op=LOAD
Dec 16 13:03:13.506000 audit: BPF prog-id=8 op=LOAD
Dec 16 13:03:13.546719 systemd-udevd[1318]: Using default interface naming scheme 'v257'.
Dec 16 13:03:13.559613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:03:13.570337 kernel: audit: type=1130 audit(1765890193.559:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.568269 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:03:13.593985 dracut-pre-trigger[1393]: rd.md=0: removing MD RAID activation
Dec 16 13:03:13.604492 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:03:13.615921 kernel: audit: type=1130 audit(1765890193.607:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.615948 kernel: audit: type=1334 audit(1765890193.610:22): prog-id=9 op=LOAD
Dec 16 13:03:13.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.610000 audit: BPF prog-id=9 op=LOAD
Dec 16 13:03:13.616269 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:03:13.628729 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:03:13.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.636545 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:03:13.661860 systemd-networkd[1461]: lo: Link UP
Dec 16 13:03:13.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.661869 systemd-networkd[1461]: lo: Gained carrier
Dec 16 13:03:13.662251 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:03:13.662874 systemd[1]: Reached target network.target - Network.
Dec 16 13:03:13.690044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:03:13.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.696466 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:03:13.767038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:03:13.767139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:13.777252 kernel: hv_vmbus: registering driver hv_netvsc
Dec 16 13:03:13.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.770794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:13.791957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:13.795247 kernel: hv_netvsc f8615163-0000-1000-2000-002248402cb2 (unnamed net_device) (uninitialized): VF slot 1 added
Dec 16 13:03:13.814038 systemd-networkd[1461]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 13:03:13.814054 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:03:13.815239 systemd-networkd[1461]: eth0: Link UP
Dec 16 13:03:13.815637 systemd-networkd[1461]: eth0: Gained carrier
Dec 16 13:03:13.815648 systemd-networkd[1461]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 13:03:13.829406 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:13.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.835194 systemd-networkd[1461]: eth0: DHCPv4 address 10.200.4.32/24, gateway 10.200.4.1 acquired from 168.63.129.16
Dec 16 13:03:13.855180 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:03:13.855221 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#291 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:03:13.889188 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:03:14.047197 kernel: nvme nvme0: using unchecked data buffer
Dec 16 13:03:14.182533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Dec 16 13:03:14.194269 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:03:14.273768 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Dec 16 13:03:14.288366 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Dec 16 13:03:14.316265 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:03:14.414635 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:03:14.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:14.415242 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:03:14.421220 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:03:14.421442 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:03:14.430337 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:03:14.476797 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:03:14.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:14.812602 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Dec 16 13:03:14.812931 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Dec 16 13:03:14.815585 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Dec 16 13:03:14.817218 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 13:03:14.822356 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:03:14.826209 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Dec 16 13:03:14.831362 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Dec 16 13:03:14.834168 kernel: pci 7870:00:00.0: enabling Extended Tags
Dec 16 13:03:14.848598 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 13:03:14.848816 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Dec 16 13:03:14.852251 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Dec 16 13:03:14.874704 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Dec 16 13:03:14.884168 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Dec 16 13:03:14.887416 kernel: hv_netvsc f8615163-0000-1000-2000-002248402cb2 eth0: VF registering: eth1
Dec 16 13:03:14.887604 kernel: mana 7870:00:00.0 eth1: joined to eth0
Dec 16 13:03:14.892185 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Dec 16 13:03:14.892296 systemd-networkd[1461]: eth1: Interface name change detected, renamed to enP30832s1.
Dec 16 13:03:14.991194 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:03:14.994173 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:03:14.995830 systemd-networkd[1461]: enP30832s1: Link UP
Dec 16 13:03:14.999227 kernel: hv_netvsc f8615163-0000-1000-2000-002248402cb2 eth0: Data path switched to VF: enP30832s1
Dec 16 13:03:14.996867 systemd-networkd[1461]: enP30832s1: Gained carrier
Dec 16 13:03:15.004222 systemd-networkd[1461]: eth0: Gained IPv6LL
Dec 16 13:03:15.479080 disk-uuid[1619]: Warning: The kernel is still using the old partition table.
Dec 16 13:03:15.479080 disk-uuid[1619]: The new table will be used at the next reboot or after you
Dec 16 13:03:15.479080 disk-uuid[1619]: run partprobe(8) or kpartx(8)
Dec 16 13:03:15.479080 disk-uuid[1619]: The operation has completed successfully.
Dec 16 13:03:15.488124 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:03:15.488262 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:03:15.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:15.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:15.498200 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:03:15.547174 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1665)
Dec 16 13:03:15.547217 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:15.549911 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:03:15.572711 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:03:15.572765 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:03:15.573829 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:03:15.580175 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:15.580793 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:03:15.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:15.584755 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:03:16.831743 ignition[1684]: Ignition 2.22.0
Dec 16 13:03:16.831758 ignition[1684]: Stage: fetch-offline
Dec 16 13:03:16.831985 ignition[1684]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:16.835818 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:03:16.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:16.833123 ignition[1684]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:16.841341 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:03:16.833986 ignition[1684]: parsed url from cmdline: ""
Dec 16 13:03:16.833990 ignition[1684]: no config URL provided
Dec 16 13:03:16.833996 ignition[1684]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:03:16.834010 ignition[1684]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:03:16.834014 ignition[1684]: failed to fetch config: resource requires networking
Dec 16 13:03:16.834339 ignition[1684]: Ignition finished successfully
Dec 16 13:03:16.870078 ignition[1690]: Ignition 2.22.0
Dec 16 13:03:16.870090 ignition[1690]: Stage: fetch
Dec 16 13:03:16.870779 ignition[1690]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:16.870792 ignition[1690]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:16.870914 ignition[1690]: parsed url from cmdline: ""
Dec 16 13:03:16.870918 ignition[1690]: no config URL provided
Dec 16 13:03:16.870923 ignition[1690]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:03:16.870930 ignition[1690]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:03:16.870957 ignition[1690]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 16 13:03:17.166947 ignition[1690]: GET result: OK
Dec 16 13:03:17.167071 ignition[1690]: config has been read from IMDS userdata
Dec 16 13:03:17.167099 ignition[1690]: parsing config with SHA512: 640b96f9d62ea2be1a0445128ce655d20d2201ec9e0e0d48aee983a12449b5da7cf185cd7321c2026fb2be02a51be51239fa2544ea98883af4dd5e1dff74be3e
Dec 16 13:03:17.174635 unknown[1690]: fetched base config from "system"
Dec 16 13:03:17.174645 unknown[1690]: fetched base config from "system"
Dec 16 13:03:17.175712 ignition[1690]: fetch: fetch complete
Dec 16 13:03:17.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:17.174651 unknown[1690]: fetched user config from "azure"
Dec 16 13:03:17.175717 ignition[1690]: fetch: fetch passed
Dec 16 13:03:17.177813 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:03:17.175758 ignition[1690]: Ignition finished successfully
Dec 16 13:03:17.187196 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:03:17.214546 ignition[1696]: Ignition 2.22.0
Dec 16 13:03:17.214558 ignition[1696]: Stage: kargs
Dec 16 13:03:17.214823 ignition[1696]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:17.217938 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:03:17.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:17.214832 ignition[1696]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:17.222232 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:03:17.215815 ignition[1696]: kargs: kargs passed
Dec 16 13:03:17.215860 ignition[1696]: Ignition finished successfully
Dec 16 13:03:17.246948 ignition[1702]: Ignition 2.22.0
Dec 16 13:03:17.246958 ignition[1702]: Stage: disks
Dec 16 13:03:17.247206 ignition[1702]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:17.249668 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:03:17.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:17.247214 ignition[1702]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:17.254956 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:03:17.248084 ignition[1702]: disks: disks passed
Dec 16 13:03:17.257092 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:03:17.248121 ignition[1702]: Ignition finished successfully
Dec 16 13:03:17.264916 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:03:17.268116 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:03:17.273197 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:03:17.274927 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:03:17.397795 systemd-fsck[1710]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks
Dec 16 13:03:17.403051 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:03:17.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:17.407712 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:03:17.757172 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none.
Dec 16 13:03:17.757624 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:03:17.760267 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:03:17.797117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:03:17.800304 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:03:17.806286 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 13:03:17.812360 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:03:17.816737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:03:17.826248 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1719)
Dec 16 13:03:17.826283 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:17.826296 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:03:17.826467 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:03:17.829704 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:03:17.833555 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:03:17.833583 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:03:17.833598 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:03:17.838207 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:03:18.379828 coreos-metadata[1721]: Dec 16 13:03:18.379 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 13:03:18.384430 coreos-metadata[1721]: Dec 16 13:03:18.384 INFO Fetch successful
Dec 16 13:03:18.387234 coreos-metadata[1721]: Dec 16 13:03:18.385 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 16 13:03:18.398211 coreos-metadata[1721]: Dec 16 13:03:18.398 INFO Fetch successful
Dec 16 13:03:18.431495 coreos-metadata[1721]: Dec 16 13:03:18.431 INFO wrote hostname ci-4515.1.0-a-bc3c22631a to /sysroot/etc/hostname
Dec 16 13:03:18.433637 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:03:18.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:18.439832 kernel: kauditd_printk_skb: 15 callbacks suppressed
Dec 16 13:03:18.439865 kernel: audit: type=1130 audit(1765890198.436:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:18.591839 initrd-setup-root[1749]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:03:18.642910 initrd-setup-root[1756]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:03:18.680489 initrd-setup-root[1763]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:03:18.703408 initrd-setup-root[1770]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:03:19.682201 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:03:19.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.688967 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:03:19.692702 kernel: audit: type=1130 audit(1765890199.682:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.693280 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:03:19.722286 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:03:19.726235 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:19.741013 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:03:19.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.751176 kernel: audit: type=1130 audit(1765890199.744:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.757769 ignition[1839]: INFO : Ignition 2.22.0
Dec 16 13:03:19.757769 ignition[1839]: INFO : Stage: mount
Dec 16 13:03:19.760068 ignition[1839]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:19.760068 ignition[1839]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:19.760068 ignition[1839]: INFO : mount: mount passed
Dec 16 13:03:19.760068 ignition[1839]: INFO : Ignition finished successfully
Dec 16 13:03:19.774233 kernel: audit: type=1130 audit(1765890199.764:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.762231 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:03:19.768205 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:03:19.788060 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
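[Editor's note: the coreos-metadata fetches logged above query the Azure Instance Metadata Service (IMDS); the endpoint and api-version appear verbatim in the log. As a minimal sketch (the helper name is illustrative, not part of the agent), such a request must carry the `Metadata: true` header:]

```python
# Sketch of the IMDS request seen in the log above. The URL path and
# api-version are copied from the log; build_imds_request is a
# hypothetical helper, not coreos-metadata's actual code.
import urllib.request

IMDS_BASE = "http://169.254.169.254/metadata/instance"

def build_imds_request(path: str, api_version: str = "2017-08-01") -> urllib.request.Request:
    """Build an IMDS request; Azure rejects requests without 'Metadata: true'."""
    url = f"{IMDS_BASE}/{path}?api-version={api_version}&format=text"
    return urllib.request.Request(url, headers={"Metadata": "true"})

# Construct (but do not send) the compute/name request from the log.
req = build_imds_request("compute/name")
```

The other address fetched above, 168.63.129.16, is the Azure wireserver/fabric endpoint rather than IMDS.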
Dec 16 13:03:19.810216 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1850)
Dec 16 13:03:19.810250 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:19.813069 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:03:19.821318 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:03:19.821355 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:03:19.821440 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:03:19.823876 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:03:19.856472 ignition[1867]: INFO : Ignition 2.22.0
Dec 16 13:03:19.856472 ignition[1867]: INFO : Stage: files
Dec 16 13:03:19.859903 ignition[1867]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:19.859903 ignition[1867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:19.859903 ignition[1867]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:03:19.859903 ignition[1867]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:03:19.859903 ignition[1867]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:03:19.902898 ignition[1867]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:03:19.904565 ignition[1867]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:03:19.904565 ignition[1867]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:03:19.903253 unknown[1867]: wrote ssh authorized keys file for user: core
Dec 16 13:03:19.947399 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:03:19.949835 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 16 13:03:20.150093 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:03:20.219291 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:03:20.219291 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:03:20.228224 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:03:20.228224 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:03:20.228224 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:03:20.228224 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:03:20.228224 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:03:20.228224 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:03:20.228224 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:03:20.250186 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:03:20.250186 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:03:20.250186 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:03:20.250186 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:03:20.250186 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:03:20.250186 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 16 13:03:20.481888 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 13:03:20.700184 ignition[1867]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:03:20.700184 ignition[1867]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 13:03:20.752916 ignition[1867]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:03:20.759756 ignition[1867]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:03:20.759756 ignition[1867]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 13:03:20.766239 ignition[1867]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:03:20.766239 ignition[1867]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:03:20.766239 ignition[1867]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:03:20.766239 ignition[1867]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:03:20.766239 ignition[1867]: INFO : files: files passed
Dec 16 13:03:20.766239 ignition[1867]: INFO : Ignition finished successfully
Dec 16 13:03:20.790527 kernel: audit: type=1130 audit(1765890200.768:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.765245 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:03:20.770342 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:03:20.778578 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:03:20.803210 initrd-setup-root-after-ignition[1897]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:03:20.803210 initrd-setup-root-after-ignition[1897]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:03:20.809625 initrd-setup-root-after-ignition[1901]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:03:20.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.806530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:03:20.811495 systemd[1]: ignition-quench.service: Deactivated successfully.
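[Editor's note: the Ignition "files" ops above (SSH keys for core, the Helm tarball, the Kubernetes sysext and its /etc/extensions link, and the prepare-helm.service preset) correspond to a provisioning config roughly like the Butane sketch below. The actual config served to this node is not in the log; variant/version, key material, and file contents are assumptions.]

```yaml
# Hypothetical Butane sketch reconstructed from the ops in the log above;
# not the node's real config. File/unit contents were not logged.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "<key not shown in the log>"
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      # unit body omitted; the log only records that it was written
```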
Dec 16 13:03:20.832526 kernel: audit: type=1130 audit(1765890200.810:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.832561 kernel: audit: type=1130 audit(1765890200.824:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.832588 kernel: audit: type=1131 audit(1765890200.824:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.811583 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:03:20.826116 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:03:20.836623 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:03:20.874591 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:03:20.874684 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:03:20.885114 kernel: audit: type=1130 audit(1765890200.876:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.885144 kernel: audit: type=1131 audit(1765890200.878:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.881334 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:03:20.885645 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:03:20.889701 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:03:20.891989 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:03:20.907345 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:03:20.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.910271 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:03:20.923880 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:03:20.924135 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:03:20.930282 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:03:20.931821 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:03:20.935323 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:03:20.935460 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:03:20.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.941271 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:03:20.943577 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:03:20.947319 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:03:20.950283 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:03:20.954308 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:03:20.958296 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:03:20.962300 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:03:20.966285 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:03:20.970320 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:03:20.974327 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:03:20.978300 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:03:20.981263 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:03:20.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.981395 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:03:20.984420 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:03:20.986423 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:03:20.992282 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:03:20.992431 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:03:21.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.998270 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:03:21.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.998394 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:03:21.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.002511 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:03:21.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.002642 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:03:21.005608 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:03:21.005733 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:03:21.009346 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 13:03:21.009475 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:03:21.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.014375 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:03:21.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.021667 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:03:21.027440 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:03:21.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.027596 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:03:21.031694 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:03:21.057018 ignition[1923]: INFO : Ignition 2.22.0
Dec 16 13:03:21.057018 ignition[1923]: INFO : Stage: umount
Dec 16 13:03:21.057018 ignition[1923]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:21.057018 ignition[1923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:21.057018 ignition[1923]: INFO : umount: umount passed
Dec 16 13:03:21.057018 ignition[1923]: INFO : Ignition finished successfully
Dec 16 13:03:21.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.031832 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:03:21.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.040616 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:03:21.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.040732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:03:21.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.056859 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:03:21.056947 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:03:21.065812 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:03:21.065920 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:03:21.069700 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:03:21.069783 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:03:21.075265 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:03:21.075318 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:03:21.077598 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:03:21.077634 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:03:21.080251 systemd[1]: Stopped target network.target - Network.
Dec 16 13:03:21.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.082922 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:03:21.082974 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:03:21.084599 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:03:21.089212 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:03:21.089560 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:03:21.097815 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:03:21.101676 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:03:21.105539 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:03:21.105579 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:03:21.109092 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:03:21.109124 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:03:21.111243 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 13:03:21.111270 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 13:03:21.113360 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:03:21.113409 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:03:21.115661 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:03:21.115699 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:03:21.118317 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:03:21.122267 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:03:21.128381 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:03:21.133071 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:03:21.138000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 13:03:21.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.133202 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:03:21.154918 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:03:21.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.157000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 13:03:21.155028 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:03:21.158577 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:03:21.160014 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:03:21.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.160053 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:03:21.163029 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:03:21.169205 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:03:21.169264 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:03:21.169364 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:03:21.169405 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:03:21.169747 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:03:21.169778 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:03:21.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.170094 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:03:21.193504 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:03:21.193621 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:03:21.197385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:03:21.197416 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:03:21.202255 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:03:21.202290 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:03:21.204422 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:03:21.204469 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:03:21.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.213229 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:03:21.213277 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:03:21.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.219196 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:03:21.219246 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:03:21.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.223711 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:03:21.226324 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:03:21.226388 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:03:21.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.233401 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:03:21.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.233449 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:03:21.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.236267 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:03:21.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.236315 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:03:21.240276 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:03:21.240325 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:03:21.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.244604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:03:21.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.244644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:21.247750 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:03:21.247815 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:03:21.252135 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:03:21.252217 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:03:21.262992 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:03:21.263044 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:03:21.314836 kernel: hv_netvsc f8615163-0000-1000-2000-002248402cb2 eth0: Data path switched from VF: enP30832s1
Dec 16 13:03:21.315103 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:03:21.317215 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:03:21.317304 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:03:21.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:21.321605 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:03:21.324604 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:03:21.351601 systemd[1]: Switching root. Dec 16 13:03:21.444829 systemd-journald[1055]: Journal stopped Dec 16 13:03:26.116561 systemd-journald[1055]: Received SIGTERM from PID 1 (systemd). Dec 16 13:03:26.116585 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:03:26.116598 kernel: SELinux: policy capability open_perms=1 Dec 16 13:03:26.116604 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:03:26.116611 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:03:26.116617 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:03:26.116624 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:03:26.116632 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:03:26.116639 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:03:26.116645 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:03:26.116652 systemd[1]: Successfully loaded SELinux policy in 196.775ms. Dec 16 13:03:26.116659 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.702ms. Dec 16 13:03:26.116667 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:03:26.116676 systemd[1]: Detected virtualization microsoft. Dec 16 13:03:26.116684 systemd[1]: Detected architecture x86-64. Dec 16 13:03:26.116691 systemd[1]: Detected first boot. Dec 16 13:03:26.116698 systemd[1]: Hostname set to . Dec 16 13:03:26.116707 systemd[1]: Initializing machine ID from random generator. 
Dec 16 13:03:26.116715 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 16 13:03:26.116722 kernel: audit: type=1334 audit(1765890203.591:90): prog-id=10 op=LOAD Dec 16 13:03:26.116729 kernel: audit: type=1334 audit(1765890203.591:91): prog-id=10 op=UNLOAD Dec 16 13:03:26.116735 kernel: audit: type=1334 audit(1765890203.591:92): prog-id=11 op=LOAD Dec 16 13:03:26.116742 kernel: audit: type=1334 audit(1765890203.591:93): prog-id=11 op=UNLOAD Dec 16 13:03:26.116751 zram_generator::config[1966]: No configuration found. Dec 16 13:03:26.116759 kernel: Guest personality initialized and is inactive Dec 16 13:03:26.116766 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Dec 16 13:03:26.116773 kernel: Initialized host personality Dec 16 13:03:26.116779 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:03:26.116786 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:03:26.116793 kernel: audit: type=1334 audit(1765890205.483:94): prog-id=12 op=LOAD Dec 16 13:03:26.116801 kernel: audit: type=1334 audit(1765890205.483:95): prog-id=3 op=UNLOAD Dec 16 13:03:26.116808 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:03:26.116815 kernel: audit: type=1334 audit(1765890205.483:96): prog-id=13 op=LOAD Dec 16 13:03:26.116822 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:03:26.116829 kernel: audit: type=1334 audit(1765890205.483:97): prog-id=14 op=LOAD Dec 16 13:03:26.116835 kernel: audit: type=1334 audit(1765890205.483:98): prog-id=4 op=UNLOAD Dec 16 13:03:26.116843 kernel: audit: type=1334 audit(1765890205.483:99): prog-id=5 op=UNLOAD Dec 16 13:03:26.116850 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:03:26.116862 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:03:26.116870 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Dec 16 13:03:26.116880 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:03:26.116888 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:03:26.116897 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:03:26.116905 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:03:26.116912 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:03:26.116919 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:03:26.116926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:03:26.116934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:03:26.116942 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 13:03:26.116951 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:03:26.116958 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:03:26.116966 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:03:26.116973 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:03:26.116981 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:03:26.116988 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:03:26.116997 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:03:26.117004 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:03:26.117011 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Dec 16 13:03:26.117018 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:03:26.117025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:03:26.117033 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:03:26.117041 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 13:03:26.117048 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:03:26.117055 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:03:26.117063 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:03:26.117070 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:03:26.117080 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:03:26.117087 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:03:26.117094 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 13:03:26.117102 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:03:26.117109 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 13:03:26.117117 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 13:03:26.117126 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:03:26.117133 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:03:26.117141 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:03:26.117148 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:03:26.117165 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:03:26.117172 systemd[1]: Mounting media.mount - External Media Directory... 
Dec 16 13:03:26.117179 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:03:26.117188 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:03:26.117195 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:03:26.117203 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:03:26.117210 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 13:03:26.117218 systemd[1]: Reached target machines.target - Containers. Dec 16 13:03:26.117225 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:03:26.117235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:03:26.117243 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:03:26.117250 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:03:26.117257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:03:26.117265 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:03:26.117272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:03:26.117279 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:03:26.117288 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:03:26.117296 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:03:26.117304 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Dec 16 13:03:26.117312 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:03:26.117319 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:03:26.117326 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:03:26.117334 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:03:26.117343 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:03:26.117351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:03:26.117358 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:03:26.117366 kernel: fuse: init (API version 7.41) Dec 16 13:03:26.117373 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:03:26.117381 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:03:26.117390 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:03:26.117398 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:03:26.117405 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:03:26.117412 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:03:26.117420 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:03:26.117427 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:03:26.117434 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:03:26.117443 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Dec 16 13:03:26.117451 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:03:26.117458 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:03:26.117465 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:03:26.117472 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:03:26.117480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:03:26.117492 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:03:26.117500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:03:26.117510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:03:26.117517 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:03:26.117525 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:03:26.117533 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:03:26.117540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:03:26.117548 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:03:26.117555 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:03:26.117565 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:03:26.117572 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 13:03:26.117580 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:03:26.117588 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:03:26.117598 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 16 13:03:26.117605 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:03:26.117613 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:03:26.117621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:03:26.117629 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:03:26.117645 systemd-journald[2064]: Collecting audit messages is enabled. Dec 16 13:03:26.117664 systemd-journald[2064]: Journal started Dec 16 13:03:26.117684 systemd-journald[2064]: Runtime Journal (/run/log/journal/30003b7c1f2a48ba89e7e3ca6508fbc3) is 8M, max 158.5M, 150.5M free. Dec 16 13:03:25.618000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 13:03:25.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.876000 audit: BPF prog-id=14 op=UNLOAD Dec 16 13:03:25.876000 audit: BPF prog-id=13 op=UNLOAD Dec 16 13:03:25.878000 audit: BPF prog-id=15 op=LOAD Dec 16 13:03:25.878000 audit: BPF prog-id=16 op=LOAD Dec 16 13:03:25.878000 audit: BPF prog-id=17 op=LOAD Dec 16 13:03:25.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:25.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:26.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.113000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 13:03:26.113000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff6df54cc0 a2=4000 a3=0 items=0 ppid=1 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:03:26.113000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 13:03:25.476285 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:03:25.485237 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Dec 16 13:03:25.485635 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:03:26.139177 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:03:26.144175 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:03:26.153255 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:03:26.163889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:03:26.171190 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:03:26.179179 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:03:26.184178 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:03:26.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.186614 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:03:26.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.191243 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:03:26.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:26.194063 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 13:03:26.201109 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:03:26.214320 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:03:26.219334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:03:26.238247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:03:26.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.318952 kernel: ACPI: bus type drm_connector registered Dec 16 13:03:26.318999 kernel: loop1: detected capacity change from 0 to 224512 Dec 16 13:03:26.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.316266 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:03:26.316460 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:03:26.322483 systemd-journald[2064]: Time spent on flushing to /var/log/journal/30003b7c1f2a48ba89e7e3ca6508fbc3 is 13.662ms for 1140 entries. Dec 16 13:03:26.322483 systemd-journald[2064]: System Journal (/var/log/journal/30003b7c1f2a48ba89e7e3ca6508fbc3) is 8M, max 2.2G, 2.2G free. Dec 16 13:03:27.847486 systemd-journald[2064]: Received client request to flush runtime journal. 
Dec 16 13:03:27.847581 kernel: loop2: detected capacity change from 0 to 27736 Dec 16 13:03:26.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.392967 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:03:26.394700 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:03:26.398193 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:03:26.406693 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:03:26.408841 systemd-tmpfiles[2089]: ACLs are not supported, ignoring. Dec 16 13:03:26.408849 systemd-tmpfiles[2089]: ACLs are not supported, ignoring. Dec 16 13:03:26.411857 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:03:26.416196 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:03:27.849322 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Dec 16 13:03:27.894664 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:03:27.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:27.897000 audit: BPF prog-id=18 op=LOAD Dec 16 13:03:27.898000 audit: BPF prog-id=19 op=LOAD Dec 16 13:03:27.898000 audit: BPF prog-id=20 op=LOAD Dec 16 13:03:27.899774 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 13:03:27.902000 audit: BPF prog-id=21 op=LOAD Dec 16 13:03:27.905294 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:03:27.910835 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:03:27.925105 systemd-tmpfiles[2128]: ACLs are not supported, ignoring. Dec 16 13:03:27.925123 systemd-tmpfiles[2128]: ACLs are not supported, ignoring. Dec 16 13:03:27.927903 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:03:27.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:28.256000 audit: BPF prog-id=22 op=LOAD Dec 16 13:03:28.256000 audit: BPF prog-id=23 op=LOAD Dec 16 13:03:28.256000 audit: BPF prog-id=24 op=LOAD Dec 16 13:03:28.258097 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 13:03:28.260000 audit: BPF prog-id=25 op=LOAD Dec 16 13:03:28.260000 audit: BPF prog-id=26 op=LOAD Dec 16 13:03:28.260000 audit: BPF prog-id=27 op=LOAD Dec 16 13:03:28.264304 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 16 13:03:28.349386 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:03:28.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:28.363189 systemd-nsresourced[2131]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Dec 16 13:03:28.364599 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 16 13:03:28.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:28.478148 systemd-oomd[2126]: No swap; memory pressure usage will be degraded
Dec 16 13:03:28.478956 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 16 13:03:28.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:28.620904 systemd-resolved[2127]: Positive Trust Anchors:
Dec 16 13:03:28.620923 systemd-resolved[2127]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:03:28.620927 systemd-resolved[2127]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 13:03:28.620959 systemd-resolved[2127]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:03:29.326192 kernel: loop3: detected capacity change from 0 to 111544
Dec 16 13:03:29.519072 systemd-resolved[2127]: Using system hostname 'ci-4515.1.0-a-bc3c22631a'.
Dec 16 13:03:29.520715 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:03:29.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:29.522337 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:03:29.524217 kernel: kauditd_printk_skb: 54 callbacks suppressed
Dec 16 13:03:29.524268 kernel: audit: type=1130 audit(1765890209.520:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:30.761191 kernel: loop4: detected capacity change from 0 to 119256
Dec 16 13:03:31.604433 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:03:31.650521 kernel: audit: type=1130 audit(1765890211.606:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:31.650593 kernel: audit: type=1334 audit(1765890211.606:154): prog-id=8 op=UNLOAD
Dec 16 13:03:31.650612 kernel: audit: type=1334 audit(1765890211.606:155): prog-id=7 op=UNLOAD
Dec 16 13:03:31.650636 kernel: audit: type=1334 audit(1765890211.606:156): prog-id=28 op=LOAD
Dec 16 13:03:31.650653 kernel: audit: type=1334 audit(1765890211.606:157): prog-id=29 op=LOAD
Dec 16 13:03:31.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:31.606000 audit: BPF prog-id=8 op=UNLOAD
Dec 16 13:03:31.606000 audit: BPF prog-id=7 op=UNLOAD
Dec 16 13:03:31.606000 audit: BPF prog-id=28 op=LOAD
Dec 16 13:03:31.606000 audit: BPF prog-id=29 op=LOAD
Dec 16 13:03:31.611324 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:03:31.648373 systemd-udevd[2151]: Using default interface naming scheme 'v257'.
Dec 16 13:03:32.056932 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:03:32.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:32.066175 kernel: audit: type=1130 audit(1765890212.059:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:32.067345 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:03:32.072430 kernel: audit: type=1334 audit(1765890212.059:159): prog-id=30 op=LOAD
Dec 16 13:03:32.059000 audit: BPF prog-id=30 op=LOAD
Dec 16 13:03:32.112896 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:03:32.266417 kernel: hv_vmbus: registering driver hyperv_fb
Dec 16 13:03:32.269484 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 16 13:03:32.269555 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 16 13:03:32.270770 kernel: Console: switching to colour dummy device 80x25
Dec 16 13:03:32.274194 kernel: hv_vmbus: registering driver hv_balloon
Dec 16 13:03:32.276805 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:03:32.276869 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 16 13:03:32.280176 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:03:32.577194 systemd-networkd[2157]: lo: Link UP
Dec 16 13:03:32.577205 systemd-networkd[2157]: lo: Gained carrier
Dec 16 13:03:32.578663 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:03:32.584274 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:03:32.585940 kernel: audit: type=1130 audit(1765890212.580:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:32.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:32.579206 systemd-networkd[2157]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 13:03:32.579211 systemd-networkd[2157]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:03:32.581417 systemd[1]: Reached target network.target - Network.
Dec 16 13:03:32.588314 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:03:32.589190 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:03:32.592267 kernel: hv_netvsc f8615163-0000-1000-2000-002248402cb2 eth0: Data path switched to VF: enP30832s1
Dec 16 13:03:32.591580 systemd-networkd[2157]: enP30832s1: Link UP
Dec 16 13:03:32.591678 systemd-networkd[2157]: eth0: Link UP
Dec 16 13:03:32.591682 systemd-networkd[2157]: eth0: Gained carrier
Dec 16 13:03:32.591701 systemd-networkd[2157]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 13:03:32.592895 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:03:32.598889 systemd-networkd[2157]: enP30832s1: Gained carrier
Dec 16 13:03:32.609219 systemd-networkd[2157]: eth0: DHCPv4 address 10.200.4.32/24, gateway 10.200.4.1 acquired from 168.63.129.16
Dec 16 13:03:32.830184 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:03:32.915279 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:03:32.924800 kernel: audit: type=1130 audit(1765890212.917:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:32.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:32.976310 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:33.000278 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:03:33.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:33.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:33.000666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:33.005306 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:33.289187 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Dec 16 13:03:34.058428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:03:34.061036 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:03:34.588443 systemd-networkd[2157]: eth0: Gained IPv6LL
Dec 16 13:03:34.591230 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:03:34.593677 kernel: kauditd_printk_skb: 2 callbacks suppressed
Dec 16 13:03:34.593996 kernel: audit: type=1130 audit(1765890214.591:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:34.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:34.593630 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:03:34.609619 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:03:34.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:34.614173 kernel: audit: type=1130 audit(1765890214.610:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:35.073850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:35.172524 kernel: audit: type=1130 audit(1765890215.074:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:35.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:35.658206 kernel: loop5: detected capacity change from 0 to 224512
Dec 16 13:03:35.674186 kernel: loop6: detected capacity change from 0 to 27736
Dec 16 13:03:36.547709 kernel: loop7: detected capacity change from 0 to 111544
Dec 16 13:03:36.547813 kernel: loop1: detected capacity change from 0 to 119256
Dec 16 13:03:36.547834 zram_generator::config[2276]: No configuration found.
Dec 16 13:03:36.454771 (sd-merge)[2240]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'.
Dec 16 13:03:36.458451 (sd-merge)[2240]: Merged extensions into '/usr'.
Dec 16 13:03:36.462457 systemd[1]: Reload requested from client PID 2088 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:03:36.462467 systemd[1]: Reloading...
Dec 16 13:03:36.716620 systemd[1]: Reloading finished in 253 ms.
Dec 16 13:03:36.752169 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:03:36.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:36.761249 kernel: audit: type=1130 audit(1765890216.754:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:36.762367 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:03:36.764329 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:03:36.766000 audit: BPF prog-id=31 op=LOAD
Dec 16 13:03:36.769234 kernel: audit: type=1334 audit(1765890216.766:168): prog-id=31 op=LOAD
Dec 16 13:03:36.767000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 13:03:36.768000 audit: BPF prog-id=32 op=LOAD
Dec 16 13:03:36.773456 kernel: audit: type=1334 audit(1765890216.767:169): prog-id=15 op=UNLOAD
Dec 16 13:03:36.773551 kernel: audit: type=1334 audit(1765890216.768:170): prog-id=32 op=LOAD
Dec 16 13:03:36.768000 audit: BPF prog-id=33 op=LOAD
Dec 16 13:03:36.768000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 13:03:36.768000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 13:03:36.768000 audit: BPF prog-id=34 op=LOAD
Dec 16 13:03:36.768000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 13:03:36.774184 kernel: audit: type=1334 audit(1765890216.768:171): prog-id=33 op=LOAD
Dec 16 13:03:36.774230 kernel: audit: type=1334 audit(1765890216.768:172): prog-id=16 op=UNLOAD
Dec 16 13:03:36.774248 kernel: audit: type=1334 audit(1765890216.768:173): prog-id=17 op=UNLOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=35 op=LOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=36 op=LOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=37 op=LOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=38 op=LOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=39 op=LOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 13:03:36.775000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 13:03:36.776000 audit: BPF prog-id=40 op=LOAD
Dec 16 13:03:36.776000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 13:03:36.778000 audit: BPF prog-id=41 op=LOAD
Dec 16 13:03:36.778000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 13:03:36.778000 audit: BPF prog-id=42 op=LOAD
Dec 16 13:03:36.778000 audit: BPF prog-id=43 op=LOAD
Dec 16 13:03:36.778000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 13:03:36.778000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 13:03:36.779000 audit: BPF prog-id=44 op=LOAD
Dec 16 13:03:36.779000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 13:03:36.779000 audit: BPF prog-id=45 op=LOAD
Dec 16 13:03:36.779000 audit: BPF prog-id=46 op=LOAD
Dec 16 13:03:36.779000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 13:03:36.779000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 13:03:36.789209 systemd[1]: Reload requested from client PID 2331 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:03:36.789307 systemd[1]: Reloading...
Dec 16 13:03:36.815522 systemd-tmpfiles[2332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:03:36.815550 systemd-tmpfiles[2332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:03:36.815811 systemd-tmpfiles[2332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:03:36.819548 systemd-tmpfiles[2332]: ACLs are not supported, ignoring.
Dec 16 13:03:36.819726 systemd-tmpfiles[2332]: ACLs are not supported, ignoring.
Dec 16 13:03:36.841078 systemd-tmpfiles[2332]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:03:36.841196 systemd-tmpfiles[2332]: Skipping /boot
Dec 16 13:03:36.853243 zram_generator::config[2366]: No configuration found.
Dec 16 13:03:36.853823 systemd-tmpfiles[2332]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:03:36.853839 systemd-tmpfiles[2332]: Skipping /boot
Dec 16 13:03:37.033362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:03:37.033758 systemd[1]: Reloading finished in 244 ms.
Dec 16 13:03:37.055997 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:03:37.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.057000 audit: BPF prog-id=47 op=LOAD
Dec 16 13:03:37.057000 audit: BPF prog-id=44 op=UNLOAD
Dec 16 13:03:37.057000 audit: BPF prog-id=48 op=LOAD
Dec 16 13:03:37.057000 audit: BPF prog-id=49 op=LOAD
Dec 16 13:03:37.057000 audit: BPF prog-id=45 op=UNLOAD
Dec 16 13:03:37.057000 audit: BPF prog-id=46 op=UNLOAD
Dec 16 13:03:37.058000 audit: BPF prog-id=50 op=LOAD
Dec 16 13:03:37.058000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 13:03:37.059000 audit: BPF prog-id=51 op=LOAD
Dec 16 13:03:37.059000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 13:03:37.059000 audit: BPF prog-id=52 op=LOAD
Dec 16 13:03:37.059000 audit: BPF prog-id=41 op=UNLOAD
Dec 16 13:03:37.060000 audit: BPF prog-id=53 op=LOAD
Dec 16 13:03:37.060000 audit: BPF prog-id=54 op=LOAD
Dec 16 13:03:37.060000 audit: BPF prog-id=42 op=UNLOAD
Dec 16 13:03:37.060000 audit: BPF prog-id=43 op=UNLOAD
Dec 16 13:03:37.060000 audit: BPF prog-id=55 op=LOAD
Dec 16 13:03:37.060000 audit: BPF prog-id=56 op=LOAD
Dec 16 13:03:37.060000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 13:03:37.060000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 13:03:37.061000 audit: BPF prog-id=57 op=LOAD
Dec 16 13:03:37.061000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 13:03:37.061000 audit: BPF prog-id=58 op=LOAD
Dec 16 13:03:37.061000 audit: BPF prog-id=59 op=LOAD
Dec 16 13:03:37.061000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 13:03:37.061000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 13:03:37.062000 audit: BPF prog-id=60 op=LOAD
Dec 16 13:03:37.062000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 13:03:37.062000 audit: BPF prog-id=61 op=LOAD
Dec 16 13:03:37.062000 audit: BPF prog-id=62 op=LOAD
Dec 16 13:03:37.062000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 13:03:37.063000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 13:03:37.073201 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:03:37.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.083527 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:03:37.088073 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:03:37.092313 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:03:37.096633 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:03:37.100337 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:03:37.106645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:37.106831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:37.112870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:37.130174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:37.134384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:37.136126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:37.136314 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 13:03:37.136408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:37.136502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:37.137505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:37.139206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:37.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.145901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:37.146083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:37.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.148391 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:37.148556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:37.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.154173 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:37.154432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:37.155446 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:37.160714 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:37.161000 audit[2431]: SYSTEM_BOOT pid=2431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.165227 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:37.168270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:37.168442 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 13:03:37.168537 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:37.168625 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:37.169541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:37.175359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:37.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.178736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:37.178884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:37.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.183607 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:37.183782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:37.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.192885 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:03:37.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.198284 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:37.198500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:37.200314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:37.205978 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:03:37.210875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:37.218515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:37.220778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:37.220875 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 13:03:37.220918 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:37.220972 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:03:37.225275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:37.225860 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:03:37.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.226943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:37.227106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:37.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.230467 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:03:37.230619 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:03:37.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.241441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:37.241599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:37.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.245412 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:37.245558 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:37.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:37.250457 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:03:37.250518 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:03:37.402110 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:03:37.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:38.077000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 13:03:38.077000 audit[2474]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffddc5b00c0 a2=420 a3=0 items=0 ppid=2427 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:03:38.077000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 13:03:38.079533 augenrules[2474]: No rules
Dec 16 13:03:38.079995 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:03:38.080348 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:03:39.701660 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:03:39.706477 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:03:45.932355 ldconfig[2429]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:03:45.946139 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:03:45.949509 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:03:45.968938 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:03:45.970568 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:03:45.973443 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:03:45.974674 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:03:45.977220 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:03:45.980336 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:03:45.983279 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:03:45.986233 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 13:03:45.987706 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 13:03:45.990216 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:03:45.991758 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:03:45.991795 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:03:45.995212 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:03:45.999451 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:03:46.003332 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:03:46.007780 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:03:46.009640 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:03:46.012226 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:03:46.025616 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:03:46.028475 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:03:46.031818 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:03:46.036100 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:03:46.037392 systemd[1]: Reached target basic.target - Basic System. 
Dec 16 13:03:46.040245 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:03:46.040278 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:03:46.058050 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 13:03:46.062261 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:03:46.066871 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:03:46.072396 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:03:46.077349 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:03:46.087308 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:03:46.094074 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:03:46.095987 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:03:46.097121 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:03:46.100521 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Dec 16 13:03:46.103370 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 13:03:46.105025 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 13:03:46.106655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:03:46.109124 jq[2491]: false Dec 16 13:03:46.112341 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Dec 16 13:03:46.126757 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:03:46.129591 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:03:46.132593 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:03:46.138694 chronyd[2486]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 13:03:46.139040 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:03:46.145273 chronyd[2486]: Timezone right/UTC failed leap second check, ignoring Dec 16 13:03:46.146323 google_oslogin_nss_cache[2496]: oslogin_cache_refresh[2496]: Refreshing passwd entry cache Dec 16 13:03:46.145414 chronyd[2486]: Loaded seccomp filter (level 2) Dec 16 13:03:46.145896 oslogin_cache_refresh[2496]: Refreshing passwd entry cache Dec 16 13:03:46.146796 KVP[2497]: KVP starting; pid is:2497 Dec 16 13:03:46.151431 kernel: hv_utils: KVP IC version 4.0 Dec 16 13:03:46.151181 KVP[2497]: KVP LIC Version: 3.1 Dec 16 13:03:46.149738 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:03:46.151625 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:03:46.152069 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:03:46.153016 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:03:46.160136 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:03:46.163713 extend-filesystems[2495]: Found /dev/nvme0n1p6 Dec 16 13:03:46.165590 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 13:03:46.172706 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Dec 16 13:03:46.175301 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:03:46.175506 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:03:46.176919 google_oslogin_nss_cache[2496]: oslogin_cache_refresh[2496]: Failure getting users, quitting Dec 16 13:03:46.176919 google_oslogin_nss_cache[2496]: oslogin_cache_refresh[2496]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:03:46.176919 google_oslogin_nss_cache[2496]: oslogin_cache_refresh[2496]: Refreshing group entry cache Dec 16 13:03:46.176477 oslogin_cache_refresh[2496]: Failure getting users, quitting Dec 16 13:03:46.176496 oslogin_cache_refresh[2496]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:03:46.176539 oslogin_cache_refresh[2496]: Refreshing group entry cache Dec 16 13:03:46.183289 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:03:46.186050 extend-filesystems[2495]: Found /dev/nvme0n1p9 Dec 16 13:03:46.183598 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:03:46.190520 extend-filesystems[2495]: Checking size of /dev/nvme0n1p9 Dec 16 13:03:46.201856 google_oslogin_nss_cache[2496]: oslogin_cache_refresh[2496]: Failure getting groups, quitting Dec 16 13:03:46.201856 google_oslogin_nss_cache[2496]: oslogin_cache_refresh[2496]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:03:46.201483 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:03:46.200058 oslogin_cache_refresh[2496]: Failure getting groups, quitting Dec 16 13:03:46.201771 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:03:46.200068 oslogin_cache_refresh[2496]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Dec 16 13:03:46.203499 jq[2511]: true Dec 16 13:03:46.208499 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:03:46.212778 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:03:46.223661 extend-filesystems[2495]: Resized partition /dev/nvme0n1p9 Dec 16 13:03:46.230383 jq[2534]: true Dec 16 13:03:46.242469 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:03:46.245382 update_engine[2510]: I20251216 13:03:46.244918 2510 main.cc:92] Flatcar Update Engine starting Dec 16 13:03:46.268172 extend-filesystems[2556]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:03:46.279213 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 13:03:46.282204 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Dec 16 13:03:46.345895 extend-filesystems[2556]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 13:03:46.345895 extend-filesystems[2556]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 13:03:46.345895 extend-filesystems[2556]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Dec 16 13:03:46.356357 extend-filesystems[2495]: Resized filesystem in /dev/nvme0n1p9 Dec 16 13:03:46.364501 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:03:46.364758 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:03:46.377545 tar[2523]: linux-amd64/LICENSE Dec 16 13:03:46.377545 tar[2523]: linux-amd64/helm Dec 16 13:03:46.381399 systemd-logind[2509]: New seat seat0. Dec 16 13:03:46.382245 systemd-logind[2509]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 13:03:46.382488 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 16 13:03:46.413664 dbus-daemon[2489]: [system] SELinux support is enabled Dec 16 13:03:46.420446 bash[2572]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:03:46.413866 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:03:46.422909 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:03:46.445589 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 13:03:46.445662 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:03:46.445684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:03:46.449265 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:03:46.449293 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:03:46.451571 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:03:46.457002 update_engine[2510]: I20251216 13:03:46.455359 2510 update_check_scheduler.cc:74] Next update check in 2m50s Dec 16 13:03:46.460455 dbus-daemon[2489]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:03:46.475345 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 16 13:03:46.533342 coreos-metadata[2488]: Dec 16 13:03:46.533 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:03:46.536426 coreos-metadata[2488]: Dec 16 13:03:46.536 INFO Fetch successful Dec 16 13:03:46.536501 coreos-metadata[2488]: Dec 16 13:03:46.536 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 13:03:46.540217 coreos-metadata[2488]: Dec 16 13:03:46.540 INFO Fetch successful Dec 16 13:03:46.540630 coreos-metadata[2488]: Dec 16 13:03:46.540 INFO Fetching http://168.63.129.16/machine/060b4bfa-017e-4892-b05f-6fc434b1e704/459fa774%2D60fa%2D4ddd%2D8649%2Db52f26440549.%5Fci%2D4515.1.0%2Da%2Dbc3c22631a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 13:03:46.542501 coreos-metadata[2488]: Dec 16 13:03:46.542 INFO Fetch successful Dec 16 13:03:46.542796 coreos-metadata[2488]: Dec 16 13:03:46.542 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:03:46.553660 coreos-metadata[2488]: Dec 16 13:03:46.552 INFO Fetch successful Dec 16 13:03:46.554061 sshd_keygen[2522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:03:46.605682 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:03:46.615238 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:03:46.628423 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 13:03:46.631312 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:03:46.635246 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:03:46.651670 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:03:46.651921 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:03:46.657366 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Dec 16 13:03:46.701617 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 13:03:46.725530 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:03:46.728616 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:03:46.733148 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:03:46.735307 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:03:46.741700 locksmithd[2602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:03:46.926112 tar[2523]: linux-amd64/README.md Dec 16 13:03:46.943929 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:03:47.324200 containerd[2529]: time="2025-12-16T13:03:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:03:47.324200 containerd[2529]: time="2025-12-16T13:03:47.323280094Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334125805Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.794µs" Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334170270Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334212254Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334225547Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334361086Z" level=info msg="loading 
plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334374403Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334424986Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334435230Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334649646Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334661132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334670798Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:03:47.334815 containerd[2529]: time="2025-12-16T13:03:47.334678469Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:03:47.336962 containerd[2529]: time="2025-12-16T13:03:47.336165173Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:03:47.336962 containerd[2529]: time="2025-12-16T13:03:47.336218684Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:03:47.336962 containerd[2529]: time="2025-12-16T13:03:47.336351873Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:03:47.336962 containerd[2529]: time="2025-12-16T13:03:47.336557893Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:03:47.336962 containerd[2529]: time="2025-12-16T13:03:47.336590044Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:03:47.336962 containerd[2529]: time="2025-12-16T13:03:47.336606581Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:03:47.336962 containerd[2529]: time="2025-12-16T13:03:47.336637446Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:03:47.337620 containerd[2529]: time="2025-12-16T13:03:47.337351694Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:03:47.337620 containerd[2529]: time="2025-12-16T13:03:47.337411741Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:03:47.352788 containerd[2529]: time="2025-12-16T13:03:47.352524889Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:03:47.352788 containerd[2529]: time="2025-12-16T13:03:47.352584667Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:03:47.352909 containerd[2529]: time="2025-12-16T13:03:47.352845128Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" 
id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:03:47.352909 containerd[2529]: time="2025-12-16T13:03:47.352865969Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:03:47.352909 containerd[2529]: time="2025-12-16T13:03:47.352879649Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:03:47.352909 containerd[2529]: time="2025-12-16T13:03:47.352893604Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:03:47.352909 containerd[2529]: time="2025-12-16T13:03:47.352906926Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:03:47.353009 containerd[2529]: time="2025-12-16T13:03:47.352918160Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:03:47.353009 containerd[2529]: time="2025-12-16T13:03:47.352932384Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:03:47.353009 containerd[2529]: time="2025-12-16T13:03:47.352953075Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:03:47.353009 containerd[2529]: time="2025-12-16T13:03:47.352966967Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:03:47.353009 containerd[2529]: time="2025-12-16T13:03:47.352979702Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:03:47.353009 containerd[2529]: time="2025-12-16T13:03:47.352990571Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:03:47.353009 containerd[2529]: time="2025-12-16T13:03:47.353003923Z" level=info msg="loading plugin" 
id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:03:47.353142 containerd[2529]: time="2025-12-16T13:03:47.353125417Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:03:47.353216 containerd[2529]: time="2025-12-16T13:03:47.353150037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:03:47.353216 containerd[2529]: time="2025-12-16T13:03:47.353191440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:03:47.353216 containerd[2529]: time="2025-12-16T13:03:47.353209323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:03:47.353333 containerd[2529]: time="2025-12-16T13:03:47.353222628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:03:47.353333 containerd[2529]: time="2025-12-16T13:03:47.353233680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:03:47.353333 containerd[2529]: time="2025-12-16T13:03:47.353246442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:03:47.353333 containerd[2529]: time="2025-12-16T13:03:47.353255107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:03:47.353333 containerd[2529]: time="2025-12-16T13:03:47.353265502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:03:47.353333 containerd[2529]: time="2025-12-16T13:03:47.353292040Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:03:47.353333 containerd[2529]: time="2025-12-16T13:03:47.353303600Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 
13:03:47.353333 containerd[2529]: time="2025-12-16T13:03:47.353328262Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:03:47.354268 containerd[2529]: time="2025-12-16T13:03:47.353388798Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:03:47.354268 containerd[2529]: time="2025-12-16T13:03:47.353408122Z" level=info msg="Start snapshots syncer" Dec 16 13:03:47.354268 containerd[2529]: time="2025-12-16T13:03:47.353430885Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353714210Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_sec
urity_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353759308Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353802289Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353886544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353915223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353927643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353938301Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353949640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353959524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 
13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353969882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353980532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.353990318Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354024722Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354048134Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354057312Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354066593Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354074766Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354087160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354098624Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354113586Z" level=info 
msg="runtime interface created" Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354118417Z" level=info msg="created NRI interface" Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354126614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354139544Z" level=info msg="Connect containerd service" Dec 16 13:03:47.354328 containerd[2529]: time="2025-12-16T13:03:47.354173783Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:03:47.354873 containerd[2529]: time="2025-12-16T13:03:47.354854889Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:03:47.600250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:03:47.605110 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.913689911Z" level=info msg="Start subscribing containerd event" Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.913770871Z" level=info msg="Start recovering state" Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.913915304Z" level=info msg="Start event monitor" Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.913929298Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.913940518Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.913980972Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.913947244Z" level=info msg="Start streaming server" Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.914000809Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.914010187Z" level=info msg="runtime interface starting up..." Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.914016852Z" level=info msg="starting plugins..." Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.914031206Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:03:47.915181 containerd[2529]: time="2025-12-16T13:03:47.914151924Z" level=info msg="containerd successfully booted in 0.593500s" Dec 16 13:03:47.914424 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:03:47.916795 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:03:47.920120 systemd[1]: Startup finished in 4.493s (kernel) + 10.911s (initrd) + 25.180s (userspace) = 40.584s. Dec 16 13:03:48.151926 kubelet[2659]: E1216 13:03:48.151832 2659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:03:48.154167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:03:48.154305 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:03:48.154710 systemd[1]: kubelet.service: Consumed 901ms CPU time, 262.2M memory peak. 
Dec 16 13:03:48.526046 login[2637]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:03:48.527873 login[2638]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:03:48.533610 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:03:48.535529 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:03:48.541313 systemd-logind[2509]: New session 2 of user core. Dec 16 13:03:48.546486 systemd-logind[2509]: New session 1 of user core. Dec 16 13:03:48.557604 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:03:48.561482 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:03:48.587716 (systemd)[2679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:03:48.589626 systemd-logind[2509]: New session c1 of user core. Dec 16 13:03:48.646257 waagent[2635]: 2025-12-16T13:03:48.646185Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 13:03:48.646787 waagent[2635]: 2025-12-16T13:03:48.646755Z INFO Daemon Daemon OS: flatcar 4515.1.0 Dec 16 13:03:48.646911 waagent[2635]: 2025-12-16T13:03:48.646891Z INFO Daemon Daemon Python: 3.11.13 Dec 16 13:03:48.647175 waagent[2635]: 2025-12-16T13:03:48.647134Z INFO Daemon Daemon Run daemon Dec 16 13:03:48.647358 waagent[2635]: 2025-12-16T13:03:48.647339Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Dec 16 13:03:48.647527 waagent[2635]: 2025-12-16T13:03:48.647511Z INFO Daemon Daemon Using waagent for provisioning Dec 16 13:03:48.647907 waagent[2635]: 2025-12-16T13:03:48.647887Z INFO Daemon Daemon Activate resource disk Dec 16 13:03:48.648114 waagent[2635]: 2025-12-16T13:03:48.648098Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 13:03:48.650038 
waagent[2635]: 2025-12-16T13:03:48.650000Z INFO Daemon Daemon Found device: None Dec 16 13:03:48.650281 waagent[2635]: 2025-12-16T13:03:48.650264Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 13:03:48.650544 waagent[2635]: 2025-12-16T13:03:48.650527Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 13:03:48.651362 waagent[2635]: 2025-12-16T13:03:48.651334Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:03:48.651521 waagent[2635]: 2025-12-16T13:03:48.651505Z INFO Daemon Daemon Running default provisioning handler Dec 16 13:03:48.675613 waagent[2635]: 2025-12-16T13:03:48.675570Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 13:03:48.676228 waagent[2635]: 2025-12-16T13:03:48.676196Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 13:03:48.676554 waagent[2635]: 2025-12-16T13:03:48.676533Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 13:03:48.676818 waagent[2635]: 2025-12-16T13:03:48.676802Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 13:03:48.738576 systemd[2679]: Queued start job for default target default.target. Dec 16 13:03:48.747895 systemd[2679]: Created slice app.slice - User Application Slice. Dec 16 13:03:48.747926 systemd[2679]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 13:03:48.747939 systemd[2679]: Reached target paths.target - Paths. Dec 16 13:03:48.747974 systemd[2679]: Reached target timers.target - Timers. Dec 16 13:03:48.748985 systemd[2679]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Dec 16 13:03:48.749759 systemd[2679]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 13:03:48.767842 waagent[2635]: 2025-12-16T13:03:48.764656Z INFO Daemon Daemon Successfully mounted dvd Dec 16 13:03:48.771228 systemd[2679]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 13:03:48.775818 systemd[2679]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:03:48.775890 systemd[2679]: Reached target sockets.target - Sockets. Dec 16 13:03:48.775924 systemd[2679]: Reached target basic.target - Basic System. Dec 16 13:03:48.775953 systemd[2679]: Reached target default.target - Main User Target. Dec 16 13:03:48.775978 systemd[2679]: Startup finished in 180ms. Dec 16 13:03:48.776088 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:03:48.780340 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:03:48.781007 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:03:48.798554 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 13:03:48.800580 waagent[2635]: 2025-12-16T13:03:48.800529Z INFO Daemon Daemon Detect protocol endpoint Dec 16 13:03:48.802215 waagent[2635]: 2025-12-16T13:03:48.800705Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:03:48.802215 waagent[2635]: 2025-12-16T13:03:48.800928Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 16 13:03:48.802215 waagent[2635]: 2025-12-16T13:03:48.801303Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 13:03:48.802215 waagent[2635]: 2025-12-16T13:03:48.801677Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 13:03:48.802215 waagent[2635]: 2025-12-16T13:03:48.801929Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 13:03:48.831576 waagent[2635]: 2025-12-16T13:03:48.828324Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 13:03:48.831576 waagent[2635]: 2025-12-16T13:03:48.828618Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 13:03:48.831576 waagent[2635]: 2025-12-16T13:03:48.828698Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 13:03:48.905180 waagent[2635]: 2025-12-16T13:03:48.905095Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 13:03:48.906659 waagent[2635]: 2025-12-16T13:03:48.905430Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 13:03:48.912320 waagent[2635]: 2025-12-16T13:03:48.912283Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:03:48.927260 waagent[2635]: 2025-12-16T13:03:48.927229Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Dec 16 13:03:48.927931 waagent[2635]: 2025-12-16T13:03:48.927798Z INFO Daemon Dec 16 13:03:48.929412 waagent[2635]: 2025-12-16T13:03:48.927935Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5cd75ae7-e52d-4ed3-bf57-0fb9f744a933 eTag: 10708276034318179148 source: Fabric] Dec 16 13:03:48.929412 waagent[2635]: 2025-12-16T13:03:48.928239Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:03:48.929412 waagent[2635]: 2025-12-16T13:03:48.928655Z INFO Daemon Dec 16 13:03:48.929412 waagent[2635]: 2025-12-16T13:03:48.928813Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:03:48.934436 waagent[2635]: 2025-12-16T13:03:48.933757Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 13:03:48.996484 waagent[2635]: 2025-12-16T13:03:48.996432Z INFO Daemon Downloaded certificate {'thumbprint': 'E1C4F9C963FEFF30BCB12913A23AF3DEAF4BCB02', 'hasPrivateKey': True} Dec 16 13:03:48.997912 waagent[2635]: 2025-12-16T13:03:48.996907Z INFO Daemon Fetch goal state completed Dec 16 13:03:49.003137 waagent[2635]: 2025-12-16T13:03:49.003077Z INFO Daemon Daemon Starting provisioning Dec 16 13:03:49.003569 waagent[2635]: 2025-12-16T13:03:49.003247Z INFO Daemon Daemon Handle ovf-env.xml. Dec 16 13:03:49.004132 waagent[2635]: 2025-12-16T13:03:49.003568Z INFO Daemon Daemon Set hostname [ci-4515.1.0-a-bc3c22631a] Dec 16 13:03:49.007247 waagent[2635]: 2025-12-16T13:03:49.007211Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-a-bc3c22631a] Dec 16 13:03:49.007862 waagent[2635]: 2025-12-16T13:03:49.007490Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 13:03:49.008727 waagent[2635]: 2025-12-16T13:03:49.008005Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 13:03:49.014964 systemd-networkd[2157]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:03:49.014971 systemd-networkd[2157]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 13:03:49.015038 systemd-networkd[2157]: eth0: DHCP lease lost Dec 16 13:03:49.027574 waagent[2635]: 2025-12-16T13:03:49.027495Z INFO Daemon Daemon Create user account if not exists Dec 16 13:03:49.027845 waagent[2635]: 2025-12-16T13:03:49.027813Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 13:03:49.027915 waagent[2635]: 2025-12-16T13:03:49.027894Z INFO Daemon Daemon Configure sudoer Dec 16 13:03:49.034076 waagent[2635]: 2025-12-16T13:03:49.034025Z INFO Daemon Daemon Configure sshd Dec 16 13:03:49.038600 waagent[2635]: 2025-12-16T13:03:49.038560Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 13:03:49.038817 waagent[2635]: 2025-12-16T13:03:49.038709Z INFO Daemon Daemon Deploy ssh public key. Dec 16 13:03:49.040509 systemd-networkd[2157]: eth0: DHCPv4 address 10.200.4.32/24, gateway 10.200.4.1 acquired from 168.63.129.16 Dec 16 13:03:50.131837 waagent[2635]: 2025-12-16T13:03:50.131777Z INFO Daemon Daemon Provisioning complete Dec 16 13:03:50.142378 waagent[2635]: 2025-12-16T13:03:50.142337Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 13:03:50.143694 waagent[2635]: 2025-12-16T13:03:50.142604Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 16 13:03:50.143694 waagent[2635]: 2025-12-16T13:03:50.142796Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 13:03:50.246072 waagent[2729]: 2025-12-16T13:03:50.245985Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 13:03:50.246447 waagent[2729]: 2025-12-16T13:03:50.246099Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Dec 16 13:03:50.246447 waagent[2729]: 2025-12-16T13:03:50.246143Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 13:03:50.246447 waagent[2729]: 2025-12-16T13:03:50.246206Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 16 13:03:50.288552 waagent[2729]: 2025-12-16T13:03:50.288497Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 13:03:50.288696 waagent[2729]: 2025-12-16T13:03:50.288654Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:03:50.288761 waagent[2729]: 2025-12-16T13:03:50.288724Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:03:50.295708 waagent[2729]: 2025-12-16T13:03:50.295658Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:03:50.301474 waagent[2729]: 2025-12-16T13:03:50.301443Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Dec 16 13:03:50.301788 waagent[2729]: 2025-12-16T13:03:50.301761Z INFO ExtHandler Dec 16 13:03:50.301827 waagent[2729]: 2025-12-16T13:03:50.301809Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 47682d35-d605-4b1a-880f-99f706cda37f eTag: 10708276034318179148 source: Fabric] Dec 16 13:03:50.302020 waagent[2729]: 2025-12-16T13:03:50.301994Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:03:50.302365 waagent[2729]: 2025-12-16T13:03:50.302338Z INFO ExtHandler Dec 16 13:03:50.302399 waagent[2729]: 2025-12-16T13:03:50.302378Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:03:50.305295 waagent[2729]: 2025-12-16T13:03:50.305268Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:03:50.367809 waagent[2729]: 2025-12-16T13:03:50.367754Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E1C4F9C963FEFF30BCB12913A23AF3DEAF4BCB02', 'hasPrivateKey': True} Dec 16 13:03:50.368209 waagent[2729]: 2025-12-16T13:03:50.368142Z INFO ExtHandler Fetch goal state completed Dec 16 13:03:50.381712 waagent[2729]: 2025-12-16T13:03:50.381665Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Dec 16 13:03:50.385753 waagent[2729]: 2025-12-16T13:03:50.385669Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2729 Dec 16 13:03:50.385843 waagent[2729]: 2025-12-16T13:03:50.385808Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 13:03:50.386107 waagent[2729]: 2025-12-16T13:03:50.386082Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 13:03:50.387214 waagent[2729]: 2025-12-16T13:03:50.387146Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 13:03:50.387498 waagent[2729]: 2025-12-16T13:03:50.387468Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 13:03:50.387604 waagent[2729]: 2025-12-16T13:03:50.387581Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 13:03:50.388010 waagent[2729]: 2025-12-16T13:03:50.387982Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Dec 16 13:03:50.405616 waagent[2729]: 2025-12-16T13:03:50.405589Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 13:03:50.405754 waagent[2729]: 2025-12-16T13:03:50.405733Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 13:03:50.411805 waagent[2729]: 2025-12-16T13:03:50.411428Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 13:03:50.417009 systemd[1]: Reload requested from client PID 2744 ('systemctl') (unit waagent.service)... Dec 16 13:03:50.417026 systemd[1]: Reloading... Dec 16 13:03:50.488191 zram_generator::config[2786]: No configuration found. Dec 16 13:03:50.672028 systemd[1]: Reloading finished in 254 ms. Dec 16 13:03:50.690682 waagent[2729]: 2025-12-16T13:03:50.690518Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 13:03:50.690800 waagent[2729]: 2025-12-16T13:03:50.690692Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 13:03:50.848194 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#206 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:03:51.429034 waagent[2729]: 2025-12-16T13:03:51.428937Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 13:03:51.429469 waagent[2729]: 2025-12-16T13:03:51.429402Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 13:03:51.430281 waagent[2729]: 2025-12-16T13:03:51.430237Z INFO ExtHandler ExtHandler Starting env monitor service. 
Dec 16 13:03:51.430472 waagent[2729]: 2025-12-16T13:03:51.430441Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:03:51.430527 waagent[2729]: 2025-12-16T13:03:51.430509Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:03:51.430829 waagent[2729]: 2025-12-16T13:03:51.430807Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 13:03:51.431083 waagent[2729]: 2025-12-16T13:03:51.430920Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 13:03:51.432255 waagent[2729]: 2025-12-16T13:03:51.431119Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 13:03:51.432395 waagent[2729]: 2025-12-16T13:03:51.432347Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 13:03:51.432702 waagent[2729]: 2025-12-16T13:03:51.432675Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 13:03:51.432702 waagent[2729]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 13:03:51.432702 waagent[2729]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 13:03:51.432702 waagent[2729]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 13:03:51.432702 waagent[2729]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:03:51.432702 waagent[2729]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:03:51.432702 waagent[2729]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:03:51.432912 waagent[2729]: 2025-12-16T13:03:51.432888Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 13:03:51.432955 waagent[2729]: 2025-12-16T13:03:51.432935Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 16 13:03:51.433083 waagent[2729]: 2025-12-16T13:03:51.433052Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:03:51.433552 waagent[2729]: 2025-12-16T13:03:51.433531Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:03:51.433783 waagent[2729]: 2025-12-16T13:03:51.433730Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 13:03:51.434551 waagent[2729]: 2025-12-16T13:03:51.434523Z INFO EnvHandler ExtHandler Configure routes Dec 16 13:03:51.434611 waagent[2729]: 2025-12-16T13:03:51.434574Z INFO EnvHandler ExtHandler Gateway:None Dec 16 13:03:51.434611 waagent[2729]: 2025-12-16T13:03:51.434604Z INFO EnvHandler ExtHandler Routes:None Dec 16 13:03:51.443898 waagent[2729]: 2025-12-16T13:03:51.443860Z INFO ExtHandler ExtHandler Dec 16 13:03:51.443961 waagent[2729]: 2025-12-16T13:03:51.443921Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a57c0699-db70-42f0-8e2e-359f893859df correlation b5cd953b-e34c-4365-b8cd-62f3dcf96399 created: 2025-12-16T13:02:44.895363Z] Dec 16 13:03:51.444251 waagent[2729]: 2025-12-16T13:03:51.444224Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 13:03:51.444663 waagent[2729]: 2025-12-16T13:03:51.444635Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 13:03:51.566834 waagent[2729]: 2025-12-16T13:03:51.566256Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 13:03:51.566834 waagent[2729]: Try `iptables -h' or 'iptables --help' for more information.) 
Dec 16 13:03:51.566834 waagent[2729]: 2025-12-16T13:03:51.566737Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E8457671-AC81-458E-887C-5C8931FADC29;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 13:03:51.625470 waagent[2729]: 2025-12-16T13:03:51.625411Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 13:03:51.625470 waagent[2729]: Executing ['ip', '-a', '-o', 'link']: Dec 16 13:03:51.625470 waagent[2729]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 13:03:51.625470 waagent[2729]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:40:2c:b2 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx002248402cb2 Dec 16 13:03:51.625470 waagent[2729]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:40:2c:b2 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 13:03:51.625470 waagent[2729]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 13:03:51.625470 waagent[2729]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 13:03:51.625470 waagent[2729]: 2: eth0 inet 10.200.4.32/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 13:03:51.625470 waagent[2729]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 13:03:51.625470 waagent[2729]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 13:03:51.625470 waagent[2729]: 2: eth0 inet6 fe80::222:48ff:fe40:2cb2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 13:03:51.717782 waagent[2729]: 2025-12-16T13:03:51.717690Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 13:03:51.717782 waagent[2729]: Chain 
INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:51.717782 waagent[2729]: pkts bytes target prot opt in out source destination Dec 16 13:03:51.717782 waagent[2729]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:51.717782 waagent[2729]: pkts bytes target prot opt in out source destination Dec 16 13:03:51.717782 waagent[2729]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes) Dec 16 13:03:51.717782 waagent[2729]: pkts bytes target prot opt in out source destination Dec 16 13:03:51.717782 waagent[2729]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:03:51.717782 waagent[2729]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:03:51.717782 waagent[2729]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:03:51.720121 waagent[2729]: 2025-12-16T13:03:51.720071Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 13:03:51.720121 waagent[2729]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:51.720121 waagent[2729]: pkts bytes target prot opt in out source destination Dec 16 13:03:51.720121 waagent[2729]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:51.720121 waagent[2729]: pkts bytes target prot opt in out source destination Dec 16 13:03:51.720121 waagent[2729]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes) Dec 16 13:03:51.720121 waagent[2729]: pkts bytes target prot opt in out source destination Dec 16 13:03:51.720121 waagent[2729]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:03:51.720121 waagent[2729]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:03:51.720121 waagent[2729]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:03:58.365579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:03:58.367290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:04:01.182379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:04:01.183641 systemd[1]: Started sshd@0-10.200.4.32:22-10.200.16.10:32830.service - OpenSSH per-connection server daemon (10.200.16.10:32830). Dec 16 13:04:02.614323 sshd[2880]: Accepted publickey for core from 10.200.16.10 port 32830 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:02.615688 sshd-session[2880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:02.620739 systemd-logind[2509]: New session 3 of user core. Dec 16 13:04:02.627369 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:04:03.017487 systemd[1]: Started sshd@1-10.200.4.32:22-10.200.16.10:32842.service - OpenSSH per-connection server daemon (10.200.16.10:32842). Dec 16 13:04:03.530853 sshd[2886]: Accepted publickey for core from 10.200.16.10 port 32842 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:03.532216 sshd-session[2886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:03.537287 systemd-logind[2509]: New session 4 of user core. Dec 16 13:04:03.546353 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:04:03.816146 sshd[2889]: Connection closed by 10.200.16.10 port 32842 Dec 16 13:04:03.816912 sshd-session[2886]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:03.819799 systemd[1]: sshd@1-10.200.4.32:22-10.200.16.10:32842.service: Deactivated successfully. Dec 16 13:04:03.821612 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:04:03.823437 systemd-logind[2509]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:04:03.824081 systemd-logind[2509]: Removed session 4. Dec 16 13:04:03.924931 systemd[1]: Started sshd@2-10.200.4.32:22-10.200.16.10:32848.service - OpenSSH per-connection server daemon (10.200.16.10:32848). 
Dec 16 13:04:04.433811 sshd[2895]: Accepted publickey for core from 10.200.16.10 port 32848 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:04.435068 sshd-session[2895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:04.439626 systemd-logind[2509]: New session 5 of user core. Dec 16 13:04:04.445342 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:04:04.715849 sshd[2898]: Connection closed by 10.200.16.10 port 32848 Dec 16 13:04:04.716623 sshd-session[2895]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:04.720401 systemd[1]: sshd@2-10.200.4.32:22-10.200.16.10:32848.service: Deactivated successfully. Dec 16 13:04:04.721985 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:04:04.722715 systemd-logind[2509]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:04:04.723967 systemd-logind[2509]: Removed session 5. Dec 16 13:04:04.846313 systemd[1]: Started sshd@3-10.200.4.32:22-10.200.16.10:32856.service - OpenSSH per-connection server daemon (10.200.16.10:32856). Dec 16 13:04:05.355318 sshd[2904]: Accepted publickey for core from 10.200.16.10 port 32856 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:05.847428 sshd-session[2904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:05.855974 systemd-logind[2509]: New session 6 of user core. Dec 16 13:04:05.857396 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:04:05.878706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:04:05.884483 (kubelet)[2913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:05.922527 kubelet[2913]: E1216 13:04:05.922484 2913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:05.925345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:05.925513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:05.925897 systemd[1]: kubelet.service: Consumed 152ms CPU time, 111M memory peak. Dec 16 13:04:06.056958 sshd[2909]: Connection closed by 10.200.16.10 port 32856 Dec 16 13:04:06.057396 sshd-session[2904]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:06.060370 systemd[1]: sshd@3-10.200.4.32:22-10.200.16.10:32856.service: Deactivated successfully. Dec 16 13:04:06.061897 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:04:06.062930 systemd-logind[2509]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:04:06.064066 systemd-logind[2509]: Removed session 6. Dec 16 13:04:06.168863 systemd[1]: Started sshd@4-10.200.4.32:22-10.200.16.10:32860.service - OpenSSH per-connection server daemon (10.200.16.10:32860). Dec 16 13:04:06.676011 sshd[2925]: Accepted publickey for core from 10.200.16.10 port 32860 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:06.677322 sshd-session[2925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:06.682194 systemd-logind[2509]: New session 7 of user core. Dec 16 13:04:06.693327 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 13:04:07.083433 sudo[2929]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:04:07.083653 sudo[2929]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:07.111273 sudo[2929]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:07.204751 sshd[2928]: Connection closed by 10.200.16.10 port 32860 Dec 16 13:04:07.205560 sshd-session[2925]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:07.209645 systemd[1]: sshd@4-10.200.4.32:22-10.200.16.10:32860.service: Deactivated successfully. Dec 16 13:04:07.211367 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:04:07.212055 systemd-logind[2509]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:04:07.213579 systemd-logind[2509]: Removed session 7. Dec 16 13:04:07.308381 systemd[1]: Started sshd@5-10.200.4.32:22-10.200.16.10:32872.service - OpenSSH per-connection server daemon (10.200.16.10:32872). Dec 16 13:04:07.818122 sshd[2935]: Accepted publickey for core from 10.200.16.10 port 32872 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:07.819468 sshd-session[2935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:07.824745 systemd-logind[2509]: New session 8 of user core. Dec 16 13:04:07.834315 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 13:04:08.010680 sudo[2940]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:04:08.010921 sudo[2940]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:08.027442 sudo[2940]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:08.032464 sudo[2939]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:04:08.032658 sudo[2939]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:08.041198 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:04:08.074000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:04:08.076561 kernel: kauditd_printk_skb: 87 callbacks suppressed Dec 16 13:04:08.076606 kernel: audit: type=1305 audit(1765890248.074:259): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:04:08.076625 augenrules[2962]: No rules Dec 16 13:04:08.074000 audit[2962]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd65afa10 a2=420 a3=0 items=0 ppid=2943 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:08.079763 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:04:08.080001 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 13:04:08.083734 kernel: audit: type=1300 audit(1765890248.074:259): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd65afa10 a2=420 a3=0 items=0 ppid=2943 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:08.083801 kernel: audit: type=1327 audit(1765890248.074:259): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:04:08.074000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:04:08.081383 sudo[2939]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:08.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.088954 kernel: audit: type=1130 audit(1765890248.077:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.092062 kernel: audit: type=1131 audit(1765890248.077:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.077000 audit[2939]: USER_END pid=2939 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=? res=success' Dec 16 13:04:08.094832 kernel: audit: type=1106 audit(1765890248.077:262): pid=2939 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.077000 audit[2939]: CRED_DISP pid=2939 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.095833 kernel: audit: type=1104 audit(1765890248.077:263): pid=2939 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.175886 sshd[2938]: Connection closed by 10.200.16.10 port 32872 Dec 16 13:04:08.176395 sshd-session[2935]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:08.176000 audit[2935]: USER_END pid=2935 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.179615 systemd[1]: sshd@5-10.200.4.32:22-10.200.16.10:32872.service: Deactivated successfully. Dec 16 13:04:08.182049 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:04:08.176000 audit[2935]: CRED_DISP pid=2935 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.184964 systemd-logind[2509]: Session 8 logged out. Waiting for processes to exit. 
Dec 16 13:04:08.185918 systemd-logind[2509]: Removed session 8. Dec 16 13:04:08.188888 kernel: audit: type=1106 audit(1765890248.176:264): pid=2935 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.188938 kernel: audit: type=1104 audit(1765890248.176:265): pid=2935 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.32:22-10.200.16.10:32872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.192419 kernel: audit: type=1131 audit(1765890248.176:266): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.32:22-10.200.16.10:32872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.280971 systemd[1]: Started sshd@6-10.200.4.32:22-10.200.16.10:32882.service - OpenSSH per-connection server daemon (10.200.16.10:32882). Dec 16 13:04:08.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.32:22-10.200.16.10:32882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:08.783000 audit[2971]: USER_ACCT pid=2971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.784760 sshd[2971]: Accepted publickey for core from 10.200.16.10 port 32882 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:08.784000 audit[2971]: CRED_ACQ pid=2971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.784000 audit[2971]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd93c7e090 a2=3 a3=0 items=0 ppid=1 pid=2971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:08.784000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:04:08.786046 sshd-session[2971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:08.791218 systemd-logind[2509]: New session 9 of user core. Dec 16 13:04:08.798372 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 13:04:08.799000 audit[2971]: USER_START pid=2971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.801000 audit[2974]: CRED_ACQ pid=2974 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.976000 audit[2975]: USER_ACCT pid=2975 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.978137 sudo[2975]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:04:08.977000 audit[2975]: CRED_REFR pid=2975 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.978393 sudo[2975]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:08.979000 audit[2975]: USER_START pid=2975 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:09.952359 chronyd[2486]: Selected source PHC0 Dec 16 13:04:11.126918 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 13:04:11.142409 (dockerd)[2992]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:04:12.385196 dockerd[2992]: time="2025-12-16T13:04:12.384733609Z" level=info msg="Starting up" Dec 16 13:04:12.385772 dockerd[2992]: time="2025-12-16T13:04:12.385640291Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:04:12.396316 dockerd[2992]: time="2025-12-16T13:04:12.396272140Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:04:12.430249 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport140342705-merged.mount: Deactivated successfully. Dec 16 13:04:12.608901 dockerd[2992]: time="2025-12-16T13:04:12.608853208Z" level=info msg="Loading containers: start." Dec 16 13:04:12.654194 kernel: Initializing XFRM netlink socket Dec 16 13:04:12.694000 audit[3038]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.694000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcefe5a180 a2=0 a3=0 items=0 ppid=2992 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:04:12.695000 audit[3040]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.695000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd94ad3550 a2=0 a3=0 items=0 ppid=2992 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.695000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:04:12.697000 audit[3042]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.697000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc97461e20 a2=0 a3=0 items=0 ppid=2992 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.697000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:04:12.699000 audit[3044]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.699000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe816c0e60 a2=0 a3=0 items=0 ppid=2992 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.699000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:04:12.700000 audit[3046]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.700000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5da210a0 a2=0 a3=0 items=0 ppid=2992 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.700000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:04:12.702000 audit[3048]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.702000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd4bda6d50 a2=0 a3=0 items=0 ppid=2992 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.702000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:12.705000 audit[3050]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.705000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe4b798ea0 a2=0 a3=0 items=0 ppid=2992 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.705000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:04:12.707000 audit[3052]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.707000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc010cdbc0 a2=0 a3=0 items=0 ppid=2992 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:04:12.738000 audit[3055]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.738000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffcb3e2e560 a2=0 a3=0 items=0 ppid=2992 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 13:04:12.740000 audit[3057]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.740000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffee5764ac0 a2=0 a3=0 items=0 ppid=2992 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.740000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:04:12.742000 audit[3059]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.742000 audit[3059]: SYSCALL arch=c000003e syscall=46 
success=yes exit=236 a0=3 a1=7ffcf2e45120 a2=0 a3=0 items=0 ppid=2992 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:04:12.744000 audit[3061]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.744000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe502e7f50 a2=0 a3=0 items=0 ppid=2992 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:12.746000 audit[3063]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.746000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff514bd6a0 a2=0 a3=0 items=0 ppid=2992 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.746000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:04:12.840000 audit[3093]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.840000 
audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc08ac70a0 a2=0 a3=0 items=0 ppid=2992 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.840000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:04:12.842000 audit[3095]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.842000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc8a6b65b0 a2=0 a3=0 items=0 ppid=2992 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:04:12.843000 audit[3097]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.843000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc37d97330 a2=0 a3=0 items=0 ppid=2992 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.843000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:04:12.845000 audit[3099]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.845000 audit[3099]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcda13c700 a2=0 a3=0 items=0 ppid=2992 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:04:12.847000 audit[3101]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.847000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbf530b70 a2=0 a3=0 items=0 ppid=2992 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:04:12.848000 audit[3103]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.848000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff3fb71030 a2=0 a3=0 items=0 ppid=2992 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.848000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:12.850000 audit[3105]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.850000 
audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc93c57570 a2=0 a3=0 items=0 ppid=2992 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:04:12.852000 audit[3107]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.852000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff045a5d20 a2=0 a3=0 items=0 ppid=2992 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:04:12.853000 audit[3109]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.853000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc2bb02eb0 a2=0 a3=0 items=0 ppid=2992 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.853000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 
13:04:12.855000 audit[3111]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.855000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc4817bc60 a2=0 a3=0 items=0 ppid=2992 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.855000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:04:12.856000 audit[3113]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.856000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd0f947da0 a2=0 a3=0 items=0 ppid=2992 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.856000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:04:12.858000 audit[3115]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.858000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffeb2a80a90 a2=0 a3=0 items=0 ppid=2992 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.858000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:12.860000 audit[3117]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.860000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe4108d3f0 a2=0 a3=0 items=0 ppid=2992 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.860000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:04:12.864000 audit[3122]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.864000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc1185f10 a2=0 a3=0 items=0 ppid=2992 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.864000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:04:12.865000 audit[3124]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.865000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff83b6abf0 a2=0 a3=0 items=0 ppid=2992 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:04:12.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:04:12.867000 audit[3126]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.867000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff48f5e7a0 a2=0 a3=0 items=0 ppid=2992 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.867000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:04:12.869000 audit[3128]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.869000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffcb9ea890 a2=0 a3=0 items=0 ppid=2992 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:04:12.870000 audit[3130]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.870000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcc21ced60 a2=0 a3=0 items=0 ppid=2992 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.870000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:04:12.872000 audit[3132]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:12.872000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffb6750270 a2=0 a3=0 items=0 ppid=2992 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:04:12.918000 audit[3137]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.918000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffcf18f2010 a2=0 a3=0 items=0 ppid=2992 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 13:04:12.920000 audit[3139]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.920000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff65e3a090 a2=0 a3=0 items=0 ppid=2992 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 13:04:12.927000 audit[3147]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.927000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffff39acee0 a2=0 a3=0 items=0 ppid=2992 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.927000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 13:04:12.932000 audit[3152]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.932000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff82d25dd0 a2=0 a3=0 items=0 ppid=2992 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.932000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 13:04:12.934000 audit[3154]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.934000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe1e2ff920 a2=0 a3=0 items=0 ppid=2992 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 13:04:12.935000 audit[3156]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.935000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff1602eac0 a2=0 a3=0 items=0 ppid=2992 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 13:04:12.937000 audit[3158]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.937000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc3954f4e0 a2=0 a3=0 items=0 ppid=2992 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:04:12.938000 audit[3160]: NETFILTER_CFG table=filter:44 family=2 entries=1 
op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:12.938000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffce3a30310 a2=0 a3=0 items=0 ppid=2992 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:12.938000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 13:04:12.940317 systemd-networkd[2157]: docker0: Link UP Dec 16 13:04:12.958390 dockerd[2992]: time="2025-12-16T13:04:12.957665959Z" level=info msg="Loading containers: done." Dec 16 13:04:13.046264 dockerd[2992]: time="2025-12-16T13:04:13.046204702Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:04:13.046422 dockerd[2992]: time="2025-12-16T13:04:13.046327161Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:04:13.046422 dockerd[2992]: time="2025-12-16T13:04:13.046412173Z" level=info msg="Initializing buildkit" Dec 16 13:04:13.094205 dockerd[2992]: time="2025-12-16T13:04:13.094140289Z" level=info msg="Completed buildkit initialization" Dec 16 13:04:13.102167 dockerd[2992]: time="2025-12-16T13:04:13.102128143Z" level=info msg="Daemon has completed initialization" Dec 16 13:04:13.102522 dockerd[2992]: time="2025-12-16T13:04:13.102284085Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:04:13.102545 systemd[1]: Started docker.service - Docker Application Container Engine. 
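The audit `PROCTITLE` fields in the records above are hex-encoded argv strings with NUL separators. A minimal decoder sketch — the hex value below is copied verbatim from the first DOCKER-USER rule logged earlier in this section:

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE hex string into a space-joined command line.

    PROCTITLE records encode the process's argv as hex bytes, with the
    original NUL separators between arguments preserved.
    """
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode() for part in raw.split(b"\x00"))

# Hex copied from the "audit: PROCTITLE proctitle=..." record above.
cmd = decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D4100444F434B45522D55534552002D6A0052455455524E"
)
print(cmd)  # /usr/bin/iptables --wait -A DOCKER-USER -j RETURN
```

This confirms the burst of NETFILTER_CFG records is dockerd installing its standard DOCKER-USER / DOCKER-FORWARD / DOCKER-ISOLATION chains via `xtables-nft-multi`.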
Dec 16 13:04:13.109187 kernel: kauditd_printk_skb: 131 callbacks suppressed Dec 16 13:04:13.109273 kernel: audit: type=1130 audit(1765890253.102:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:13.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:13.427290 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2296738275-merged.mount: Deactivated successfully. Dec 16 13:04:14.102276 containerd[2529]: time="2025-12-16T13:04:14.102220836Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 13:04:15.092866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938309191.mount: Deactivated successfully. Dec 16 13:04:16.115590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:04:16.118576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:16.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:16.671277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:16.676270 kernel: audit: type=1130 audit(1765890256.670:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:16.685335 (kubelet)[3266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:16.702176 containerd[2529]: time="2025-12-16T13:04:16.701556186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:16.714742 containerd[2529]: time="2025-12-16T13:04:16.714708147Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=28235253" Dec 16 13:04:16.718190 containerd[2529]: time="2025-12-16T13:04:16.718141169Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:16.724911 containerd[2529]: time="2025-12-16T13:04:16.724860104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:16.725530 containerd[2529]: time="2025-12-16T13:04:16.725508714Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.623238134s" Dec 16 13:04:16.725914 containerd[2529]: time="2025-12-16T13:04:16.725899272Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 13:04:16.727140 containerd[2529]: time="2025-12-16T13:04:16.726990810Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" 
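Each containerd "Pulled image" record reports the image size and elapsed wall time, so a rough effective pull rate can be derived. A small sketch using the kube-apiserver pull figures logged just above (size `29068782` bytes in `2.623238134s`; the computed rate is an approximation and ignores registry round-trips within the window):

```python
# Values copied from the containerd "Pulled image" log record above.
size_bytes = 29_068_782      # reported size of kube-apiserver:v1.32.10
elapsed_s = 2.623238134      # reported pull duration

# Effective throughput over the whole pull window.
rate_mib_s = size_bytes / elapsed_s / (1024 * 1024)
print(f"{rate_mib_s:.1f} MiB/s")  # 10.6 MiB/s
```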
Dec 16 13:04:16.728144 kubelet[3266]: E1216 13:04:16.728081 3266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:16.729675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:16.729815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:16.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:16.730212 systemd[1]: kubelet.service: Consumed 157ms CPU time, 110.4M memory peak. Dec 16 13:04:16.733180 kernel: audit: type=1131 audit(1765890256.729:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 13:04:18.338996 containerd[2529]: time="2025-12-16T13:04:18.338922744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:18.344574 containerd[2529]: time="2025-12-16T13:04:18.344411681Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24983855" Dec 16 13:04:18.350121 containerd[2529]: time="2025-12-16T13:04:18.350097245Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:18.362891 containerd[2529]: time="2025-12-16T13:04:18.362857406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:18.363742 containerd[2529]: time="2025-12-16T13:04:18.363532432Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.636281007s" Dec 16 13:04:18.363742 containerd[2529]: time="2025-12-16T13:04:18.363565176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 13:04:18.364332 containerd[2529]: time="2025-12-16T13:04:18.364308856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 13:04:19.631781 containerd[2529]: time="2025-12-16T13:04:19.631721226Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:19.634903 containerd[2529]: time="2025-12-16T13:04:19.634723267Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 16 13:04:19.653596 containerd[2529]: time="2025-12-16T13:04:19.653569657Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:19.666890 containerd[2529]: time="2025-12-16T13:04:19.666855316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:19.667786 containerd[2529]: time="2025-12-16T13:04:19.667600773Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.303258099s" Dec 16 13:04:19.667786 containerd[2529]: time="2025-12-16T13:04:19.667635414Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 13:04:19.668449 containerd[2529]: time="2025-12-16T13:04:19.668412847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 13:04:20.395195 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 16 13:04:20.795199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660758485.mount: Deactivated successfully. 
Dec 16 13:04:21.250564 containerd[2529]: time="2025-12-16T13:04:21.250489356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:21.254176 containerd[2529]: time="2025-12-16T13:04:21.253971426Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31157702" Dec 16 13:04:21.258074 containerd[2529]: time="2025-12-16T13:04:21.258021742Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:21.262425 containerd[2529]: time="2025-12-16T13:04:21.262379937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:21.263217 containerd[2529]: time="2025-12-16T13:04:21.262814127Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.59436897s" Dec 16 13:04:21.263217 containerd[2529]: time="2025-12-16T13:04:21.262854145Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 13:04:21.263582 containerd[2529]: time="2025-12-16T13:04:21.263563707Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 13:04:22.353148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3433820268.mount: Deactivated successfully. 
Dec 16 13:04:23.256266 containerd[2529]: time="2025-12-16T13:04:23.256203037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:23.261800 containerd[2529]: time="2025-12-16T13:04:23.261757846Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18308878" Dec 16 13:04:23.265492 containerd[2529]: time="2025-12-16T13:04:23.265448605Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:23.270204 containerd[2529]: time="2025-12-16T13:04:23.270142025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:23.271185 containerd[2529]: time="2025-12-16T13:04:23.270906480Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.007311689s" Dec 16 13:04:23.271185 containerd[2529]: time="2025-12-16T13:04:23.270946485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 13:04:23.271750 containerd[2529]: time="2025-12-16T13:04:23.271721599Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:04:23.812268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499126855.mount: Deactivated successfully. 
Dec 16 13:04:23.841381 containerd[2529]: time="2025-12-16T13:04:23.841336117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:04:23.845751 containerd[2529]: time="2025-12-16T13:04:23.845592042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=881" Dec 16 13:04:23.853420 containerd[2529]: time="2025-12-16T13:04:23.853395360Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:04:23.861067 containerd[2529]: time="2025-12-16T13:04:23.861035293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:04:23.861775 containerd[2529]: time="2025-12-16T13:04:23.861467479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 589.711886ms" Dec 16 13:04:23.861775 containerd[2529]: time="2025-12-16T13:04:23.861499382Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:04:23.862114 containerd[2529]: time="2025-12-16T13:04:23.862077783Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 13:04:24.579258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997980867.mount: Deactivated 
successfully. Dec 16 13:04:26.332137 containerd[2529]: time="2025-12-16T13:04:26.332067090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:26.336347 containerd[2529]: time="2025-12-16T13:04:26.336306633Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56280058" Dec 16 13:04:26.341329 containerd[2529]: time="2025-12-16T13:04:26.341285171Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:26.349899 containerd[2529]: time="2025-12-16T13:04:26.349687924Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.487561791s" Dec 16 13:04:26.349899 containerd[2529]: time="2025-12-16T13:04:26.349730266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 13:04:26.349899 containerd[2529]: time="2025-12-16T13:04:26.349745664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:26.865557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:04:26.867480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:27.393349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:04:27.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:27.397174 kernel: audit: type=1130 audit(1765890267.392:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:27.409378 (kubelet)[3428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:27.461762 kubelet[3428]: E1216 13:04:27.461722 3428 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:27.464331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:27.464453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:27.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:27.468415 systemd[1]: kubelet.service: Consumed 159ms CPU time, 110.6M memory peak. Dec 16 13:04:27.469201 kernel: audit: type=1131 audit(1765890267.463:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 13:04:28.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:28.692589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:28.692777 systemd[1]: kubelet.service: Consumed 159ms CPU time, 110.6M memory peak. Dec 16 13:04:28.696595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:28.697913 kernel: audit: type=1130 audit(1765890268.691:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:28.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:28.702230 kernel: audit: type=1131 audit(1765890268.691:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:28.729443 systemd[1]: Reload requested from client PID 3442 ('systemctl') (unit session-9.scope)... Dec 16 13:04:28.729581 systemd[1]: Reloading... Dec 16 13:04:28.835190 zram_generator::config[3492]: No configuration found. Dec 16 13:04:29.040167 systemd[1]: Reloading finished in 310 ms. Dec 16 13:04:29.353819 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:04:29.353931 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:04:29.354306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:29.354381 systemd[1]: kubelet.service: Consumed 86ms CPU time, 78M memory peak. 
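The `audit(...)` stamps interleaved with the journal's wall-clock prefixes are epoch seconds plus a serial number. Decoding one of the SERVICE_START stamps logged earlier in this section shows it matches the surrounding "Dec 16 13:04:13.102" journal prefix:

```python
from datetime import datetime, timezone

# Epoch seconds taken from "audit(1765890253.102:316)" logged above.
epoch = 1765890253
ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(ts.isoformat())  # 2025-12-16T13:04:13+00:00
```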
Dec 16 13:04:29.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:29.359183 kernel: audit: type=1130 audit(1765890269.353:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:29.360094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:29.364859 kernel: audit: type=1334 audit(1765890269.360:324): prog-id=87 op=LOAD Dec 16 13:04:29.364933 kernel: audit: type=1334 audit(1765890269.360:325): prog-id=67 op=UNLOAD Dec 16 13:04:29.360000 audit: BPF prog-id=87 op=LOAD Dec 16 13:04:29.360000 audit: BPF prog-id=67 op=UNLOAD Dec 16 13:04:29.368497 kernel: audit: type=1334 audit(1765890269.362:326): prog-id=88 op=LOAD Dec 16 13:04:29.368556 kernel: audit: type=1334 audit(1765890269.362:327): prog-id=80 op=UNLOAD Dec 16 13:04:29.362000 audit: BPF prog-id=88 op=LOAD Dec 16 13:04:29.362000 audit: BPF prog-id=80 op=UNLOAD Dec 16 13:04:29.362000 audit: BPF prog-id=89 op=LOAD Dec 16 13:04:29.362000 audit: BPF prog-id=90 op=LOAD Dec 16 13:04:29.362000 audit: BPF prog-id=81 op=UNLOAD Dec 16 13:04:29.362000 audit: BPF prog-id=82 op=UNLOAD Dec 16 13:04:29.363000 audit: BPF prog-id=91 op=LOAD Dec 16 13:04:29.363000 audit: BPF prog-id=68 op=UNLOAD Dec 16 13:04:29.363000 audit: BPF prog-id=92 op=LOAD Dec 16 13:04:29.363000 audit: BPF prog-id=93 op=LOAD Dec 16 13:04:29.363000 audit: BPF prog-id=69 op=UNLOAD Dec 16 13:04:29.363000 audit: BPF prog-id=70 op=UNLOAD Dec 16 13:04:29.364000 audit: BPF prog-id=94 op=LOAD Dec 16 13:04:29.364000 audit: BPF prog-id=71 op=UNLOAD Dec 16 13:04:29.364000 audit: BPF prog-id=95 op=LOAD Dec 16 13:04:29.364000 audit: BPF prog-id=86 op=UNLOAD Dec 16 13:04:29.365000 audit: BPF prog-id=96 op=LOAD Dec 16 
13:04:29.365000 audit: BPF prog-id=74 op=UNLOAD Dec 16 13:04:29.365000 audit: BPF prog-id=97 op=LOAD Dec 16 13:04:29.365000 audit: BPF prog-id=98 op=LOAD Dec 16 13:04:29.365000 audit: BPF prog-id=75 op=UNLOAD Dec 16 13:04:29.365000 audit: BPF prog-id=76 op=UNLOAD Dec 16 13:04:29.367000 audit: BPF prog-id=99 op=LOAD Dec 16 13:04:29.367000 audit: BPF prog-id=100 op=LOAD Dec 16 13:04:29.367000 audit: BPF prog-id=72 op=UNLOAD Dec 16 13:04:29.367000 audit: BPF prog-id=73 op=UNLOAD Dec 16 13:04:29.369000 audit: BPF prog-id=101 op=LOAD Dec 16 13:04:29.372364 kernel: audit: type=1334 audit(1765890269.362:328): prog-id=89 op=LOAD Dec 16 13:04:29.376000 audit: BPF prog-id=77 op=UNLOAD Dec 16 13:04:29.376000 audit: BPF prog-id=102 op=LOAD Dec 16 13:04:29.376000 audit: BPF prog-id=103 op=LOAD Dec 16 13:04:29.376000 audit: BPF prog-id=78 op=UNLOAD Dec 16 13:04:29.376000 audit: BPF prog-id=79 op=UNLOAD Dec 16 13:04:29.376000 audit: BPF prog-id=104 op=LOAD Dec 16 13:04:29.376000 audit: BPF prog-id=83 op=UNLOAD Dec 16 13:04:29.376000 audit: BPF prog-id=105 op=LOAD Dec 16 13:04:29.376000 audit: BPF prog-id=106 op=LOAD Dec 16 13:04:29.377000 audit: BPF prog-id=84 op=UNLOAD Dec 16 13:04:29.377000 audit: BPF prog-id=85 op=UNLOAD Dec 16 13:04:31.340327 update_engine[2510]: I20251216 13:04:31.340220 2510 update_attempter.cc:509] Updating boot flags... Dec 16 13:04:35.357642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:35.362772 kernel: kauditd_printk_skb: 35 callbacks suppressed Dec 16 13:04:35.362859 kernel: audit: type=1130 audit(1765890275.358:364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:35.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 13:04:35.372477 (kubelet)[3582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:04:35.412838 kubelet[3582]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:04:35.412838 kubelet[3582]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:04:35.412838 kubelet[3582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:04:35.413111 kubelet[3582]: I1216 13:04:35.412901 3582 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:04:35.734245 kubelet[3582]: I1216 13:04:35.733406 3582 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:04:35.734245 kubelet[3582]: I1216 13:04:35.733445 3582 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:04:35.734245 kubelet[3582]: I1216 13:04:35.733912 3582 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:04:35.769226 kubelet[3582]: E1216 13:04:35.769188 3582 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:35.771219 
kubelet[3582]: I1216 13:04:35.771196 3582 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:04:35.778483 kubelet[3582]: I1216 13:04:35.778461 3582 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:04:35.780858 kubelet[3582]: I1216 13:04:35.780840 3582 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 13:04:35.782181 kubelet[3582]: I1216 13:04:35.782129 3582 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:04:35.782361 kubelet[3582]: I1216 13:04:35.782180 3582 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-bc3c22631a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicy
Options":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:04:35.782493 kubelet[3582]: I1216 13:04:35.782369 3582 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:04:35.782493 kubelet[3582]: I1216 13:04:35.782381 3582 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:04:35.782536 kubelet[3582]: I1216 13:04:35.782505 3582 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:04:35.785805 kubelet[3582]: I1216 13:04:35.785789 3582 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:04:35.785870 kubelet[3582]: I1216 13:04:35.785818 3582 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:04:35.785870 kubelet[3582]: I1216 13:04:35.785844 3582 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:04:35.785870 kubelet[3582]: I1216 13:04:35.785856 3582 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:04:35.792368 kubelet[3582]: W1216 13:04:35.792329 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:35.792700 kubelet[3582]: E1216 13:04:35.792685 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:35.793437 kubelet[3582]: 
W1216 13:04:35.792796 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-bc3c22631a&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:35.793437 kubelet[3582]: E1216 13:04:35.792827 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-bc3c22631a&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:35.793437 kubelet[3582]: I1216 13:04:35.792897 3582 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:04:35.793437 kubelet[3582]: I1216 13:04:35.793295 3582 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:04:35.793842 kubelet[3582]: W1216 13:04:35.793826 3582 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 13:04:35.797192 kubelet[3582]: I1216 13:04:35.796242 3582 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:04:35.797192 kubelet[3582]: I1216 13:04:35.796280 3582 server.go:1287] "Started kubelet" Dec 16 13:04:35.797192 kubelet[3582]: I1216 13:04:35.796624 3582 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:04:35.798070 kubelet[3582]: I1216 13:04:35.797804 3582 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:04:35.802570 kubelet[3582]: I1216 13:04:35.802514 3582 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:04:35.802804 kubelet[3582]: I1216 13:04:35.802789 3582 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:04:35.803725 kubelet[3582]: I1216 13:04:35.803712 3582 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:04:35.804297 kubelet[3582]: E1216 13:04:35.802964 3582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-a-bc3c22631a.1881b3d9892710a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-bc3c22631a,UID:ci-4515.1.0-a-bc3c22631a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-bc3c22631a,},FirstTimestamp:2025-12-16 13:04:35.796258977 +0000 UTC m=+0.420581491,LastTimestamp:2025-12-16 13:04:35.796258977 +0000 UTC m=+0.420581491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-bc3c22631a,}" Dec 16 13:04:35.805059 kubelet[3582]: I1216 13:04:35.805023 3582 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:04:35.805361 kubelet[3582]: I1216 13:04:35.805321 3582 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:04:35.805494 kubelet[3582]: E1216 13:04:35.805478 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:35.808000 audit[3594]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:35.813900 kubelet[3582]: I1216 13:04:35.806819 3582 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:04:35.813900 kubelet[3582]: W1216 13:04:35.806989 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:35.813900 kubelet[3582]: E1216 13:04:35.807031 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:35.813900 kubelet[3582]: I1216 13:04:35.807071 3582 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:04:35.813900 kubelet[3582]: E1216 13:04:35.807131 3582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-bc3c22631a?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="200ms" Dec 16 13:04:35.814176 kernel: audit: type=1325 audit(1765890275.808:365): table=mangle:45 family=2 
entries=2 op=nft_register_chain pid=3594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:35.815204 kubelet[3582]: I1216 13:04:35.815190 3582 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:04:35.808000 audit[3594]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdbece32b0 a2=0 a3=0 items=0 ppid=3582 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:35.815520 kubelet[3582]: I1216 13:04:35.815428 3582 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:04:35.819015 kubelet[3582]: E1216 13:04:35.819000 3582 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:04:35.819352 kubelet[3582]: I1216 13:04:35.819341 3582 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:04:35.822174 kernel: audit: type=1300 audit(1765890275.808:365): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdbece32b0 a2=0 a3=0 items=0 ppid=3582 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:35.822238 kernel: audit: type=1327 audit(1765890275.808:365): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:04:35.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:04:35.809000 audit[3595]: NETFILTER_CFG table=filter:46 family=2 entries=1 
op=nft_register_chain pid=3595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:35.829933 kernel: audit: type=1325 audit(1765890275.809:366): table=filter:46 family=2 entries=1 op=nft_register_chain pid=3595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:35.830000 kernel: audit: type=1300 audit(1765890275.809:366): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdef098130 a2=0 a3=0 items=0 ppid=3582 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:35.809000 audit[3595]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdef098130 a2=0 a3=0 items=0 ppid=3582 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:35.839660 kernel: audit: type=1327 audit(1765890275.809:366): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:04:35.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:04:35.839737 kubelet[3582]: I1216 13:04:35.839202 3582 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:04:35.839737 kubelet[3582]: I1216 13:04:35.839222 3582 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:04:35.839737 kubelet[3582]: I1216 13:04:35.839239 3582 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:04:35.811000 audit[3597]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:35.841418 kernel: audit: type=1325 audit(1765890275.811:367): table=filter:47 family=2 entries=2 op=nft_register_chain pid=3597 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:35.811000 audit[3597]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc61d02280 a2=0 a3=0 items=0 ppid=3582 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:35.844411 kernel: audit: type=1300 audit(1765890275.811:367): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc61d02280 a2=0 a3=0 items=0 ppid=3582 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:35.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:04:35.846876 kernel: audit: type=1327 audit(1765890275.811:367): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:04:35.813000 audit[3599]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:35.813000 audit[3599]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc97f52ff0 a2=0 a3=0 items=0 ppid=3582 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:35.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:04:35.905556 kubelet[3582]: E1216 13:04:35.905531 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not 
found" Dec 16 13:04:36.006225 kubelet[3582]: E1216 13:04:36.006091 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.007533 kubelet[3582]: E1216 13:04:36.007507 3582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-bc3c22631a?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="400ms" Dec 16 13:04:36.106997 kubelet[3582]: E1216 13:04:36.106955 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.207445 kubelet[3582]: E1216 13:04:36.207397 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.308378 kubelet[3582]: E1216 13:04:36.308250 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.408088 kubelet[3582]: E1216 13:04:36.408051 3582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-bc3c22631a?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="800ms" Dec 16 13:04:36.409205 kubelet[3582]: E1216 13:04:36.409188 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.509664 kubelet[3582]: E1216 13:04:36.509618 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.564000 audit[3606]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 
13:04:36.564000 audit[3606]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff496475a0 a2=0 a3=0 items=0 ppid=3582 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:36.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 13:04:36.566000 audit[3608]: NETFILTER_CFG table=mangle:50 family=2 entries=1 op=nft_register_chain pid=3608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:36.566000 audit[3608]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffff8349e0 a2=0 a3=0 items=0 ppid=3582 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:36.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:04:36.567000 audit[3609]: NETFILTER_CFG table=mangle:51 family=10 entries=2 op=nft_register_chain pid=3609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:36.567000 audit[3609]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff04df9500 a2=0 a3=0 items=0 ppid=3582 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:36.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 
13:04:36.567000 audit[3610]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:36.567000 audit[3610]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc549cbc40 a2=0 a3=0 items=0 ppid=3582 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:36.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:04:36.569000 audit[3612]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:36.569000 audit[3612]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe08e6e0d0 a2=0 a3=0 items=0 ppid=3582 pid=3612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:36.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:04:36.569000 audit[3611]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:36.569000 audit[3611]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe696b44f0 a2=0 a3=0 items=0 ppid=3582 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:36.569000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:04:36.570000 audit[3613]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3613 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:36.570000 audit[3613]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd97d86a0 a2=0 a3=0 items=0 ppid=3582 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:36.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:04:36.571000 audit[3614]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:36.571000 audit[3614]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd04965f30 a2=0 a3=0 items=0 ppid=3582 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:36.571000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:04:36.899201 kubelet[3582]: I1216 13:04:36.565631 3582 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:04:36.899201 kubelet[3582]: I1216 13:04:36.567843 3582 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:04:36.899201 kubelet[3582]: I1216 13:04:36.567867 3582 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:04:36.899201 kubelet[3582]: I1216 13:04:36.567895 3582 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:04:36.899201 kubelet[3582]: I1216 13:04:36.567903 3582 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:04:36.899201 kubelet[3582]: E1216 13:04:36.567958 3582 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:04:36.899201 kubelet[3582]: W1216 13:04:36.568747 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:36.899201 kubelet[3582]: E1216 13:04:36.568790 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:36.899201 kubelet[3582]: E1216 13:04:36.610030 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.899201 kubelet[3582]: E1216 13:04:36.668540 3582 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:04:36.899201 kubelet[3582]: E1216 13:04:36.710885 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.899201 kubelet[3582]: E1216 
13:04:36.811389 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.899426 kubelet[3582]: E1216 13:04:36.868640 3582 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:04:36.906255 kubelet[3582]: W1216 13:04:36.906184 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-bc3c22631a&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:36.906335 kubelet[3582]: E1216 13:04:36.906269 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-bc3c22631a&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:36.911484 kubelet[3582]: E1216 13:04:36.911468 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:36.982540 kubelet[3582]: I1216 13:04:36.982488 3582 policy_none.go:49] "None policy: Start" Dec 16 13:04:36.982540 kubelet[3582]: I1216 13:04:36.982549 3582 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:04:36.982741 kubelet[3582]: I1216 13:04:36.982564 3582 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:04:37.011980 kubelet[3582]: E1216 13:04:37.011960 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.034792 kubelet[3582]: W1216 13:04:37.034751 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:37.034884 kubelet[3582]: E1216 13:04:37.034804 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:37.090503 kubelet[3582]: E1216 13:04:37.090407 3582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-a-bc3c22631a.1881b3d9892710a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-bc3c22631a,UID:ci-4515.1.0-a-bc3c22631a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-bc3c22631a,},FirstTimestamp:2025-12-16 13:04:35.796258977 +0000 UTC m=+0.420581491,LastTimestamp:2025-12-16 13:04:35.796258977 +0000 UTC m=+0.420581491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-bc3c22631a,}" Dec 16 13:04:37.112645 kubelet[3582]: E1216 13:04:37.112626 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.209669 kubelet[3582]: E1216 13:04:37.209545 3582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-bc3c22631a?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" 
interval="1.6s" Dec 16 13:04:37.213758 kubelet[3582]: E1216 13:04:37.213737 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.269008 kubelet[3582]: E1216 13:04:37.268978 3582 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:04:37.789445 kubelet[3582]: E1216 13:04:37.314476 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.789445 kubelet[3582]: W1216 13:04:37.332134 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:37.789445 kubelet[3582]: E1216 13:04:37.332217 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:37.789445 kubelet[3582]: E1216 13:04:37.414592 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.789445 kubelet[3582]: E1216 13:04:37.515055 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.789445 kubelet[3582]: E1216 13:04:37.615790 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.789445 kubelet[3582]: E1216 13:04:37.716263 3582 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.805964 kubelet[3582]: W1216 13:04:37.805932 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:37.806058 kubelet[3582]: E1216 13:04:37.805991 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:37.817123 kubelet[3582]: E1216 13:04:37.817106 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.839567 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:04:37.840792 kubelet[3582]: E1216 13:04:37.840462 3582 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:37.851942 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:04:37.855199 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 13:04:37.874759 kubelet[3582]: I1216 13:04:37.874741 3582 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:04:37.875096 kubelet[3582]: I1216 13:04:37.875040 3582 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:04:37.875231 kubelet[3582]: I1216 13:04:37.875170 3582 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:04:37.876359 kubelet[3582]: I1216 13:04:37.876204 3582 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:04:37.877613 kubelet[3582]: E1216 13:04:37.877593 3582 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:04:37.877685 kubelet[3582]: E1216 13:04:37.877640 3582 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:37.978531 kubelet[3582]: I1216 13:04:37.978480 3582 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:37.978910 kubelet[3582]: E1216 13:04:37.978881 3582 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.079032 systemd[1]: Created slice kubepods-burstable-podf032cd5149ec1f5f6b9d25e9ae9e40e8.slice - libcontainer container kubepods-burstable-podf032cd5149ec1f5f6b9d25e9ae9e40e8.slice. 
Dec 16 13:04:38.088028 kubelet[3582]: E1216 13:04:38.087903 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.091054 systemd[1]: Created slice kubepods-burstable-podb09e9e7f34ccccbcc607401fa72a4e3b.slice - libcontainer container kubepods-burstable-podb09e9e7f34ccccbcc607401fa72a4e3b.slice. Dec 16 13:04:38.102149 kubelet[3582]: E1216 13:04:38.102131 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.104448 systemd[1]: Created slice kubepods-burstable-pod75871316865ba88fe69691642a612d6a.slice - libcontainer container kubepods-burstable-pod75871316865ba88fe69691642a612d6a.slice. Dec 16 13:04:38.106326 kubelet[3582]: E1216 13:04:38.106306 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.119435 kubelet[3582]: I1216 13:04:38.119359 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f032cd5149ec1f5f6b9d25e9ae9e40e8-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-bc3c22631a\" (UID: \"f032cd5149ec1f5f6b9d25e9ae9e40e8\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.119491 kubelet[3582]: I1216 13:04:38.119445 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f032cd5149ec1f5f6b9d25e9ae9e40e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-bc3c22631a\" (UID: \"f032cd5149ec1f5f6b9d25e9ae9e40e8\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" Dec 16 
13:04:38.119491 kubelet[3582]: I1216 13:04:38.119471 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.119551 kubelet[3582]: I1216 13:04:38.119492 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.119551 kubelet[3582]: I1216 13:04:38.119527 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75871316865ba88fe69691642a612d6a-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-bc3c22631a\" (UID: \"75871316865ba88fe69691642a612d6a\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.119551 kubelet[3582]: I1216 13:04:38.119546 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f032cd5149ec1f5f6b9d25e9ae9e40e8-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-bc3c22631a\" (UID: \"f032cd5149ec1f5f6b9d25e9ae9e40e8\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.119617 kubelet[3582]: I1216 13:04:38.119564 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-ca-certs\") pod 
\"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.119617 kubelet[3582]: I1216 13:04:38.119581 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.119617 kubelet[3582]: I1216 13:04:38.119601 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.180747 kubelet[3582]: I1216 13:04:38.180731 3582 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.181133 kubelet[3582]: E1216 13:04:38.181112 3582 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.389430 containerd[2529]: time="2025-12-16T13:04:38.389385558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-bc3c22631a,Uid:f032cd5149ec1f5f6b9d25e9ae9e40e8,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:38.403825 containerd[2529]: time="2025-12-16T13:04:38.403791073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-bc3c22631a,Uid:b09e9e7f34ccccbcc607401fa72a4e3b,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:38.407633 
containerd[2529]: time="2025-12-16T13:04:38.407600460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-bc3c22631a,Uid:75871316865ba88fe69691642a612d6a,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:38.582992 kubelet[3582]: I1216 13:04:38.582962 3582 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.583402 kubelet[3582]: E1216 13:04:38.583379 3582 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:38.611906 kubelet[3582]: W1216 13:04:38.611869 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-bc3c22631a&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:38.611978 kubelet[3582]: E1216 13:04:38.611924 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-bc3c22631a&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:38.810164 kubelet[3582]: E1216 13:04:38.810036 3582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-bc3c22631a?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="3.2s" Dec 16 13:04:39.385940 kubelet[3582]: I1216 13:04:39.385899 3582 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:39.397805 kubelet[3582]: E1216 13:04:39.386825 3582 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:39.622344 kubelet[3582]: W1216 13:04:39.622263 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:39.622344 kubelet[3582]: E1216 13:04:39.622348 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:40.170393 kubelet[3582]: W1216 13:04:40.170310 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:40.170860 kubelet[3582]: E1216 13:04:40.170402 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:40.747536 kubelet[3582]: W1216 13:04:40.747461 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:40.747536 kubelet[3582]: E1216 13:04:40.747541 3582 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:40.989424 kubelet[3582]: I1216 13:04:40.989385 3582 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:40.989789 kubelet[3582]: E1216 13:04:40.989769 3582 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:41.866735 kubelet[3582]: E1216 13:04:41.866685 3582 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:42.010932 kubelet[3582]: E1216 13:04:42.010886 3582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-bc3c22631a?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="6.4s" Dec 16 13:04:42.546633 kubelet[3582]: W1216 13:04:42.546549 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-bc3c22631a&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:42.546633 kubelet[3582]: E1216 13:04:42.546645 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-bc3c22631a&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:44.192897 kubelet[3582]: I1216 13:04:44.192856 3582 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:44.193493 kubelet[3582]: E1216 13:04:44.193300 3582 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:44.304495 kubelet[3582]: W1216 13:04:44.304463 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:44.334692 kubelet[3582]: E1216 13:04:44.304508 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:44.850373 containerd[2529]: time="2025-12-16T13:04:44.850314438Z" level=info msg="connecting to shim 057adfe8aab80a3d9dbfe188320a7064bc594b27d536b82e8855ccb74894b9c9" address="unix:///run/containerd/s/595e4baa0c80b7055cd20ac8e4ffc45d68e088bbc30b4f9f190bc203365f2ede" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:44.882388 systemd[1]: Started cri-containerd-057adfe8aab80a3d9dbfe188320a7064bc594b27d536b82e8855ccb74894b9c9.scope - libcontainer container 057adfe8aab80a3d9dbfe188320a7064bc594b27d536b82e8855ccb74894b9c9. 
Dec 16 13:04:44.899195 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 16 13:04:44.899302 kernel: audit: type=1334 audit(1765890284.895:377): prog-id=107 op=LOAD Dec 16 13:04:44.895000 audit: BPF prog-id=107 op=LOAD Dec 16 13:04:44.898000 audit: BPF prog-id=108 op=LOAD Dec 16 13:04:44.900529 kernel: audit: type=1334 audit(1765890284.898:378): prog-id=108 op=LOAD Dec 16 13:04:44.898000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.910908 kernel: audit: type=1300 audit(1765890284.898:378): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.910971 kernel: audit: type=1327 audit(1765890284.898:378): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.913110 containerd[2529]: time="2025-12-16T13:04:44.913066896Z" level=info msg="connecting to shim a7b246c14e83980494d079d02dcfcda0a37b58aa6c34804202a2a3aa3ddd1fc4" address="unix:///run/containerd/s/61234c180e1d2ac31cbe793bc8627a53d731c5026e68b0da07bbbc5791d69a2b" namespace=k8s.io 
protocol=ttrpc version=3 Dec 16 13:04:44.899000 audit: BPF prog-id=108 op=UNLOAD Dec 16 13:04:44.899000 audit[3637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.922320 kernel: audit: type=1334 audit(1765890284.899:379): prog-id=108 op=UNLOAD Dec 16 13:04:44.922375 kernel: audit: type=1300 audit(1765890284.899:379): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.930030 kernel: audit: type=1327 audit(1765890284.899:379): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.899000 audit: BPF prog-id=109 op=LOAD Dec 16 13:04:44.899000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.943866 kernel: audit: type=1334 audit(1765890284.899:380): prog-id=109 op=LOAD Dec 16 13:04:44.943923 kernel: audit: type=1300 
audit(1765890284.899:380): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.960245 kernel: audit: type=1327 audit(1765890284.899:380): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.899000 audit: BPF prog-id=110 op=LOAD Dec 16 13:04:44.899000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.900000 audit: BPF prog-id=110 op=UNLOAD Dec 16 13:04:44.900000 audit[3637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:04:44.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.900000 audit: BPF prog-id=109 op=UNLOAD Dec 16 13:04:44.900000 audit[3637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.900000 audit: BPF prog-id=111 op=LOAD Dec 16 13:04:44.900000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3624 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376164666538616162383061336439646266653138383332306137 Dec 16 13:04:44.968494 systemd[1]: Started cri-containerd-a7b246c14e83980494d079d02dcfcda0a37b58aa6c34804202a2a3aa3ddd1fc4.scope - libcontainer container a7b246c14e83980494d079d02dcfcda0a37b58aa6c34804202a2a3aa3ddd1fc4. 
Dec 16 13:04:44.986303 containerd[2529]: time="2025-12-16T13:04:44.986265131Z" level=info msg="connecting to shim 9335558ede692c413d4e02edd52168291fc4baa389e969de0f83b788bacd4d38" address="unix:///run/containerd/s/7bb2e618b75eb95be0349fb1201002584409975a21d2265f795a7f109d3d890b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:44.987000 audit: BPF prog-id=112 op=LOAD Dec 16 13:04:44.988000 audit: BPF prog-id=113 op=LOAD Dec 16 13:04:44.988000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623234366331346538333938303439346430373964303264636663 Dec 16 13:04:44.988000 audit: BPF prog-id=113 op=UNLOAD Dec 16 13:04:44.988000 audit[3675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623234366331346538333938303439346430373964303264636663 Dec 16 13:04:44.988000 audit: BPF prog-id=114 op=LOAD Dec 16 13:04:44.988000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623234366331346538333938303439346430373964303264636663 Dec 16 13:04:44.989000 audit: BPF prog-id=115 op=LOAD Dec 16 13:04:44.989000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623234366331346538333938303439346430373964303264636663 Dec 16 13:04:44.989000 audit: BPF prog-id=115 op=UNLOAD Dec 16 13:04:44.989000 audit[3675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623234366331346538333938303439346430373964303264636663 Dec 16 13:04:44.990000 audit: BPF prog-id=114 op=UNLOAD Dec 16 13:04:44.990000 audit[3675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623234366331346538333938303439346430373964303264636663 Dec 16 13:04:44.990000 audit: BPF prog-id=116 op=LOAD Dec 16 13:04:44.990000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:44.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623234366331346538333938303439346430373964303264636663 Dec 16 13:04:45.013237 systemd[1]: Started cri-containerd-9335558ede692c413d4e02edd52168291fc4baa389e969de0f83b788bacd4d38.scope - libcontainer container 9335558ede692c413d4e02edd52168291fc4baa389e969de0f83b788bacd4d38. 
Dec 16 13:04:45.032000 audit: BPF prog-id=117 op=LOAD Dec 16 13:04:45.032000 audit: BPF prog-id=118 op=LOAD Dec 16 13:04:45.032000 audit[3718]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3707 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:45.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333535353865646536393263343133643465303265646435323136 Dec 16 13:04:45.032000 audit: BPF prog-id=118 op=UNLOAD Dec 16 13:04:45.032000 audit[3718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3707 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:45.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333535353865646536393263343133643465303265646435323136 Dec 16 13:04:45.032000 audit: BPF prog-id=119 op=LOAD Dec 16 13:04:45.032000 audit[3718]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3707 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:45.032000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333535353865646536393263343133643465303265646435323136 Dec 16 13:04:45.032000 audit: BPF prog-id=120 op=LOAD Dec 16 13:04:45.032000 audit[3718]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3707 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:45.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333535353865646536393263343133643465303265646435323136 Dec 16 13:04:45.032000 audit: BPF prog-id=120 op=UNLOAD Dec 16 13:04:45.032000 audit[3718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3707 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:45.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333535353865646536393263343133643465303265646435323136 Dec 16 13:04:45.032000 audit: BPF prog-id=119 op=UNLOAD Dec 16 13:04:45.032000 audit[3718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3707 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:04:45.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333535353865646536393263343133643465303265646435323136 Dec 16 13:04:45.033000 audit: BPF prog-id=121 op=LOAD Dec 16 13:04:45.033000 audit[3718]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3707 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:45.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333535353865646536393263343133643465303265646435323136 Dec 16 13:04:45.295920 containerd[2529]: time="2025-12-16T13:04:45.295873058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-bc3c22631a,Uid:f032cd5149ec1f5f6b9d25e9ae9e40e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"057adfe8aab80a3d9dbfe188320a7064bc594b27d536b82e8855ccb74894b9c9\"" Dec 16 13:04:45.344413 containerd[2529]: time="2025-12-16T13:04:45.344365990Z" level=info msg="CreateContainer within sandbox \"057adfe8aab80a3d9dbfe188320a7064bc594b27d536b82e8855ccb74894b9c9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:04:45.347247 containerd[2529]: time="2025-12-16T13:04:45.347215717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-bc3c22631a,Uid:b09e9e7f34ccccbcc607401fa72a4e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7b246c14e83980494d079d02dcfcda0a37b58aa6c34804202a2a3aa3ddd1fc4\"" Dec 16 13:04:45.349133 containerd[2529]: 
time="2025-12-16T13:04:45.349104339Z" level=info msg="CreateContainer within sandbox \"a7b246c14e83980494d079d02dcfcda0a37b58aa6c34804202a2a3aa3ddd1fc4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:04:45.390546 containerd[2529]: time="2025-12-16T13:04:45.390509948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-bc3c22631a,Uid:75871316865ba88fe69691642a612d6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9335558ede692c413d4e02edd52168291fc4baa389e969de0f83b788bacd4d38\"" Dec 16 13:04:45.392955 containerd[2529]: time="2025-12-16T13:04:45.392917319Z" level=info msg="CreateContainer within sandbox \"9335558ede692c413d4e02edd52168291fc4baa389e969de0f83b788bacd4d38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:04:45.828663 kubelet[3582]: W1216 13:04:45.828613 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:45.828663 kubelet[3582]: E1216 13:04:45.828672 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:45.851180 containerd[2529]: time="2025-12-16T13:04:45.850632672Z" level=info msg="Container 2951802d6bcf108eba0fa89b7a99dbe1a6b68d316376d9ac8522b1803f7b1777: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:45.852493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248432819.mount: Deactivated successfully. 
Dec 16 13:04:45.989919 kubelet[3582]: W1216 13:04:45.989870 3582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Dec 16 13:04:45.989919 kubelet[3582]: E1216 13:04:45.989924 3582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:04:46.036671 containerd[2529]: time="2025-12-16T13:04:46.035750003Z" level=info msg="Container 29aa55c29f1634179123fa947179b6d068ad8af6fc2dd7169201f3128f646500: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:46.197460 containerd[2529]: time="2025-12-16T13:04:46.197408763Z" level=info msg="Container 07896c4d9efcf8b769d86381d4a58c34ab52a97822d83c86e02a47ddce5c69ff: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:46.638523 containerd[2529]: time="2025-12-16T13:04:46.638208321Z" level=info msg="CreateContainer within sandbox \"057adfe8aab80a3d9dbfe188320a7064bc594b27d536b82e8855ccb74894b9c9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2951802d6bcf108eba0fa89b7a99dbe1a6b68d316376d9ac8522b1803f7b1777\"" Dec 16 13:04:46.639685 containerd[2529]: time="2025-12-16T13:04:46.639653011Z" level=info msg="StartContainer for \"2951802d6bcf108eba0fa89b7a99dbe1a6b68d316376d9ac8522b1803f7b1777\"" Dec 16 13:04:46.641456 containerd[2529]: time="2025-12-16T13:04:46.641421473Z" level=info msg="connecting to shim 2951802d6bcf108eba0fa89b7a99dbe1a6b68d316376d9ac8522b1803f7b1777" address="unix:///run/containerd/s/595e4baa0c80b7055cd20ac8e4ffc45d68e088bbc30b4f9f190bc203365f2ede" protocol=ttrpc version=3 Dec 16 13:04:46.677388 systemd[1]: 
Started cri-containerd-2951802d6bcf108eba0fa89b7a99dbe1a6b68d316376d9ac8522b1803f7b1777.scope - libcontainer container 2951802d6bcf108eba0fa89b7a99dbe1a6b68d316376d9ac8522b1803f7b1777. Dec 16 13:04:46.845861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341850054.mount: Deactivated successfully. Dec 16 13:04:46.847000 audit: BPF prog-id=122 op=LOAD Dec 16 13:04:46.847000 audit: BPF prog-id=123 op=LOAD Dec 16 13:04:46.847000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3624 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:46.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239353138303264366263663130386562613066613839623761393964 Dec 16 13:04:46.847000 audit: BPF prog-id=123 op=UNLOAD Dec 16 13:04:46.847000 audit[3756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3624 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:46.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239353138303264366263663130386562613066613839623761393964 Dec 16 13:04:46.848000 audit: BPF prog-id=124 op=LOAD Dec 16 13:04:46.848000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3624 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:46.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239353138303264366263663130386562613066613839623761393964 Dec 16 13:04:46.848000 audit: BPF prog-id=125 op=LOAD Dec 16 13:04:46.848000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3624 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:46.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239353138303264366263663130386562613066613839623761393964 Dec 16 13:04:46.848000 audit: BPF prog-id=125 op=UNLOAD Dec 16 13:04:46.848000 audit[3756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3624 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:46.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239353138303264366263663130386562613066613839623761393964 Dec 16 13:04:46.848000 audit: BPF prog-id=124 op=UNLOAD Dec 16 13:04:46.848000 audit[3756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3624 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:46.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239353138303264366263663130386562613066613839623761393964 Dec 16 13:04:46.848000 audit: BPF prog-id=126 op=LOAD Dec 16 13:04:46.848000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3624 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:46.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239353138303264366263663130386562613066613839623761393964 Dec 16 13:04:47.296025 containerd[2529]: time="2025-12-16T13:04:47.295830457Z" level=info msg="CreateContainer within sandbox \"9335558ede692c413d4e02edd52168291fc4baa389e969de0f83b788bacd4d38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"29aa55c29f1634179123fa947179b6d068ad8af6fc2dd7169201f3128f646500\"" Dec 16 13:04:47.297274 containerd[2529]: time="2025-12-16T13:04:47.295929974Z" level=info msg="CreateContainer within sandbox \"a7b246c14e83980494d079d02dcfcda0a37b58aa6c34804202a2a3aa3ddd1fc4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"07896c4d9efcf8b769d86381d4a58c34ab52a97822d83c86e02a47ddce5c69ff\"" Dec 16 13:04:47.297274 containerd[2529]: time="2025-12-16T13:04:47.296789675Z" level=info msg="StartContainer for 
\"2951802d6bcf108eba0fa89b7a99dbe1a6b68d316376d9ac8522b1803f7b1777\" returns successfully" Dec 16 13:04:47.297886 containerd[2529]: time="2025-12-16T13:04:47.297730998Z" level=info msg="StartContainer for \"29aa55c29f1634179123fa947179b6d068ad8af6fc2dd7169201f3128f646500\"" Dec 16 13:04:47.299172 containerd[2529]: time="2025-12-16T13:04:47.298090481Z" level=info msg="StartContainer for \"07896c4d9efcf8b769d86381d4a58c34ab52a97822d83c86e02a47ddce5c69ff\"" Dec 16 13:04:47.300119 containerd[2529]: time="2025-12-16T13:04:47.300096136Z" level=info msg="connecting to shim 29aa55c29f1634179123fa947179b6d068ad8af6fc2dd7169201f3128f646500" address="unix:///run/containerd/s/7bb2e618b75eb95be0349fb1201002584409975a21d2265f795a7f109d3d890b" protocol=ttrpc version=3 Dec 16 13:04:47.300585 containerd[2529]: time="2025-12-16T13:04:47.300463519Z" level=info msg="connecting to shim 07896c4d9efcf8b769d86381d4a58c34ab52a97822d83c86e02a47ddce5c69ff" address="unix:///run/containerd/s/61234c180e1d2ac31cbe793bc8627a53d731c5026e68b0da07bbbc5791d69a2b" protocol=ttrpc version=3 Dec 16 13:04:47.354467 systemd[1]: Started cri-containerd-07896c4d9efcf8b769d86381d4a58c34ab52a97822d83c86e02a47ddce5c69ff.scope - libcontainer container 07896c4d9efcf8b769d86381d4a58c34ab52a97822d83c86e02a47ddce5c69ff. Dec 16 13:04:47.356405 systemd[1]: Started cri-containerd-29aa55c29f1634179123fa947179b6d068ad8af6fc2dd7169201f3128f646500.scope - libcontainer container 29aa55c29f1634179123fa947179b6d068ad8af6fc2dd7169201f3128f646500. 
Dec 16 13:04:47.376000 audit: BPF prog-id=127 op=LOAD Dec 16 13:04:47.376000 audit: BPF prog-id=128 op=LOAD Dec 16 13:04:47.376000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3663 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383936633464396566636638623736396438363338316434613538 Dec 16 13:04:47.376000 audit: BPF prog-id=128 op=UNLOAD Dec 16 13:04:47.376000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383936633464396566636638623736396438363338316434613538 Dec 16 13:04:47.376000 audit: BPF prog-id=129 op=LOAD Dec 16 13:04:47.376000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3663 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.376000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383936633464396566636638623736396438363338316434613538 Dec 16 13:04:47.377000 audit: BPF prog-id=130 op=LOAD Dec 16 13:04:47.377000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3663 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383936633464396566636638623736396438363338316434613538 Dec 16 13:04:47.377000 audit: BPF prog-id=130 op=UNLOAD Dec 16 13:04:47.377000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383936633464396566636638623736396438363338316434613538 Dec 16 13:04:47.377000 audit: BPF prog-id=129 op=UNLOAD Dec 16 13:04:47.377000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:04:47.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383936633464396566636638623736396438363338316434613538 Dec 16 13:04:47.377000 audit: BPF prog-id=131 op=LOAD Dec 16 13:04:47.377000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3663 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037383936633464396566636638623736396438363338316434613538 Dec 16 13:04:47.386000 audit: BPF prog-id=132 op=LOAD Dec 16 13:04:47.387000 audit: BPF prog-id=133 op=LOAD Dec 16 13:04:47.387000 audit[3789]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3707 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239616135356332396631363334313739313233666139343731373962 Dec 16 13:04:47.387000 audit: BPF prog-id=133 op=UNLOAD Dec 16 13:04:47.387000 audit[3789]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3707 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239616135356332396631363334313739313233666139343731373962 Dec 16 13:04:47.387000 audit: BPF prog-id=134 op=LOAD Dec 16 13:04:47.387000 audit[3789]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3707 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239616135356332396631363334313739313233666139343731373962 Dec 16 13:04:47.387000 audit: BPF prog-id=135 op=LOAD Dec 16 13:04:47.387000 audit[3789]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3707 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239616135356332396631363334313739313233666139343731373962 Dec 16 13:04:47.387000 audit: BPF prog-id=135 op=UNLOAD Dec 16 13:04:47.387000 audit[3789]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3707 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239616135356332396631363334313739313233666139343731373962 Dec 16 13:04:47.387000 audit: BPF prog-id=134 op=UNLOAD Dec 16 13:04:47.387000 audit[3789]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3707 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239616135356332396631363334313739313233666139343731373962 Dec 16 13:04:47.387000 audit: BPF prog-id=136 op=LOAD Dec 16 13:04:47.387000 audit[3789]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3707 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:47.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239616135356332396631363334313739313233666139343731373962 Dec 16 13:04:47.444784 containerd[2529]: time="2025-12-16T13:04:47.444734941Z" level=info msg="StartContainer for \"29aa55c29f1634179123fa947179b6d068ad8af6fc2dd7169201f3128f646500\" returns 
successfully" Dec 16 13:04:47.458290 containerd[2529]: time="2025-12-16T13:04:47.458249557Z" level=info msg="StartContainer for \"07896c4d9efcf8b769d86381d4a58c34ab52a97822d83c86e02a47ddce5c69ff\" returns successfully" Dec 16 13:04:47.594334 kubelet[3582]: E1216 13:04:47.594018 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:47.599487 kubelet[3582]: E1216 13:04:47.599357 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:47.602167 kubelet[3582]: E1216 13:04:47.600368 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:47.878608 kubelet[3582]: E1216 13:04:47.878438 3582 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:48.605384 kubelet[3582]: E1216 13:04:48.604558 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:48.606516 kubelet[3582]: E1216 13:04:48.606496 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:48.607567 kubelet[3582]: E1216 13:04:48.606138 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:49.022403 kubelet[3582]: E1216 13:04:49.022322 3582 nodelease.go:49] "Failed to get 
node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:49.376655 kubelet[3582]: E1216 13:04:49.376612 3582 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4515.1.0-a-bc3c22631a" not found Dec 16 13:04:49.606548 kubelet[3582]: E1216 13:04:49.606481 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:49.606978 kubelet[3582]: E1216 13:04:49.606802 3582 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:49.735358 kubelet[3582]: E1216 13:04:49.735248 3582 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4515.1.0-a-bc3c22631a" not found Dec 16 13:04:50.156996 kubelet[3582]: E1216 13:04:50.156955 3582 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4515.1.0-a-bc3c22631a" not found Dec 16 13:04:50.596663 kubelet[3582]: I1216 13:04:50.596093 3582 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:50.601341 kubelet[3582]: I1216 13:04:50.601304 3582 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:50.601341 kubelet[3582]: E1216 13:04:50.601341 3582 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-a-bc3c22631a\": node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:50.613023 kubelet[3582]: E1216 13:04:50.612996 3582 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:50.714059 kubelet[3582]: E1216 13:04:50.714027 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:50.814526 kubelet[3582]: E1216 13:04:50.814486 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:50.915173 kubelet[3582]: E1216 13:04:50.915131 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:51.015623 kubelet[3582]: E1216 13:04:51.015588 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:51.116565 kubelet[3582]: E1216 13:04:51.116523 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:51.217121 kubelet[3582]: E1216 13:04:51.217038 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:51.318199 kubelet[3582]: E1216 13:04:51.317946 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:51.326558 systemd[1]: Reload requested from client PID 3854 ('systemctl') (unit session-9.scope)... Dec 16 13:04:51.326573 systemd[1]: Reloading... Dec 16 13:04:51.418043 kubelet[3582]: E1216 13:04:51.418000 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:51.422197 zram_generator::config[3910]: No configuration found. 
Dec 16 13:04:51.519143 kubelet[3582]: E1216 13:04:51.518800 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:51.614440 systemd[1]: Reloading finished in 287 ms. Dec 16 13:04:51.619779 kubelet[3582]: E1216 13:04:51.619757 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:51.638058 kubelet[3582]: I1216 13:04:51.638016 3582 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:04:51.638344 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:51.642222 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:04:51.642482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:51.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:51.643387 kernel: kauditd_printk_skb: 122 callbacks suppressed Dec 16 13:04:51.643435 kernel: audit: type=1131 audit(1765890291.641:425): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:51.647248 systemd[1]: kubelet.service: Consumed 808ms CPU time, 130.3M memory peak. Dec 16 13:04:51.651415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:04:51.651000 audit: BPF prog-id=137 op=LOAD Dec 16 13:04:51.656227 kernel: audit: type=1334 audit(1765890291.651:426): prog-id=137 op=LOAD Dec 16 13:04:51.656297 kernel: audit: type=1334 audit(1765890291.651:427): prog-id=87 op=UNLOAD Dec 16 13:04:51.651000 audit: BPF prog-id=87 op=UNLOAD Dec 16 13:04:51.656000 audit: BPF prog-id=138 op=LOAD Dec 16 13:04:51.659229 kernel: audit: type=1334 audit(1765890291.656:428): prog-id=138 op=LOAD Dec 16 13:04:51.656000 audit: BPF prog-id=91 op=UNLOAD Dec 16 13:04:51.656000 audit: BPF prog-id=139 op=LOAD Dec 16 13:04:51.661350 kernel: audit: type=1334 audit(1765890291.656:429): prog-id=91 op=UNLOAD Dec 16 13:04:51.661439 kernel: audit: type=1334 audit(1765890291.656:430): prog-id=139 op=LOAD Dec 16 13:04:51.656000 audit: BPF prog-id=140 op=LOAD Dec 16 13:04:51.663186 kernel: audit: type=1334 audit(1765890291.656:431): prog-id=140 op=LOAD Dec 16 13:04:51.656000 audit: BPF prog-id=92 op=UNLOAD Dec 16 13:04:51.656000 audit: BPF prog-id=93 op=UNLOAD Dec 16 13:04:51.665911 kernel: audit: type=1334 audit(1765890291.656:432): prog-id=92 op=UNLOAD Dec 16 13:04:51.665953 kernel: audit: type=1334 audit(1765890291.656:433): prog-id=93 op=UNLOAD Dec 16 13:04:51.657000 audit: BPF prog-id=141 op=LOAD Dec 16 13:04:51.667459 kernel: audit: type=1334 audit(1765890291.657:434): prog-id=141 op=LOAD Dec 16 13:04:51.657000 audit: BPF prog-id=94 op=UNLOAD Dec 16 13:04:51.658000 audit: BPF prog-id=142 op=LOAD Dec 16 13:04:51.661000 audit: BPF prog-id=96 op=UNLOAD Dec 16 13:04:51.661000 audit: BPF prog-id=143 op=LOAD Dec 16 13:04:51.661000 audit: BPF prog-id=144 op=LOAD Dec 16 13:04:51.661000 audit: BPF prog-id=97 op=UNLOAD Dec 16 13:04:51.661000 audit: BPF prog-id=98 op=UNLOAD Dec 16 13:04:51.661000 audit: BPF prog-id=145 op=LOAD Dec 16 13:04:51.661000 audit: BPF prog-id=146 op=LOAD Dec 16 13:04:51.661000 audit: BPF prog-id=99 op=UNLOAD Dec 16 13:04:51.661000 audit: BPF prog-id=100 op=UNLOAD Dec 16 13:04:51.663000 audit: BPF prog-id=147 
op=LOAD Dec 16 13:04:51.663000 audit: BPF prog-id=104 op=UNLOAD Dec 16 13:04:51.663000 audit: BPF prog-id=148 op=LOAD Dec 16 13:04:51.663000 audit: BPF prog-id=149 op=LOAD Dec 16 13:04:51.663000 audit: BPF prog-id=105 op=UNLOAD Dec 16 13:04:51.663000 audit: BPF prog-id=106 op=UNLOAD Dec 16 13:04:51.665000 audit: BPF prog-id=150 op=LOAD Dec 16 13:04:51.665000 audit: BPF prog-id=95 op=UNLOAD Dec 16 13:04:51.666000 audit: BPF prog-id=151 op=LOAD Dec 16 13:04:51.666000 audit: BPF prog-id=88 op=UNLOAD Dec 16 13:04:51.666000 audit: BPF prog-id=152 op=LOAD Dec 16 13:04:51.666000 audit: BPF prog-id=153 op=LOAD Dec 16 13:04:51.667000 audit: BPF prog-id=89 op=UNLOAD Dec 16 13:04:51.667000 audit: BPF prog-id=90 op=UNLOAD Dec 16 13:04:51.668000 audit: BPF prog-id=154 op=LOAD Dec 16 13:04:51.668000 audit: BPF prog-id=101 op=UNLOAD Dec 16 13:04:51.668000 audit: BPF prog-id=155 op=LOAD Dec 16 13:04:51.668000 audit: BPF prog-id=156 op=LOAD Dec 16 13:04:51.668000 audit: BPF prog-id=102 op=UNLOAD Dec 16 13:04:51.668000 audit: BPF prog-id=103 op=UNLOAD Dec 16 13:04:52.924520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:52.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:52.935432 (kubelet)[3970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:04:52.978765 kubelet[3970]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:04:52.978765 kubelet[3970]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 16 13:04:52.978765 kubelet[3970]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:04:52.979076 kubelet[3970]: I1216 13:04:52.978864 3970 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:04:52.987305 kubelet[3970]: I1216 13:04:52.987283 3970 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:04:52.987305 kubelet[3970]: I1216 13:04:52.987302 3970 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:04:52.987499 kubelet[3970]: I1216 13:04:52.987489 3970 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:04:52.989404 kubelet[3970]: I1216 13:04:52.989361 3970 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 13:04:52.992418 kubelet[3970]: I1216 13:04:52.992296 3970 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:04:52.998985 kubelet[3970]: I1216 13:04:52.998966 3970 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:04:53.005177 kubelet[3970]: I1216 13:04:53.004424 3970 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:04:53.005177 kubelet[3970]: I1216 13:04:53.004601 3970 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:04:53.005177 kubelet[3970]: I1216 13:04:53.004623 3970 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-bc3c22631a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:04:53.005177 kubelet[3970]: I1216 13:04:53.004867 3970 topology_manager.go:138] "Creating topology manager 
with none policy" Dec 16 13:04:53.005383 kubelet[3970]: I1216 13:04:53.004880 3970 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:04:53.005383 kubelet[3970]: I1216 13:04:53.004952 3970 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:04:53.005383 kubelet[3970]: I1216 13:04:53.005080 3970 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:04:53.005383 kubelet[3970]: I1216 13:04:53.005099 3970 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:04:53.005383 kubelet[3970]: I1216 13:04:53.005122 3970 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:04:53.005383 kubelet[3970]: I1216 13:04:53.005135 3970 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:04:53.017894 kubelet[3970]: I1216 13:04:53.017828 3970 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:04:53.018239 kubelet[3970]: I1216 13:04:53.018226 3970 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:04:53.020188 kubelet[3970]: I1216 13:04:53.018619 3970 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:04:53.020188 kubelet[3970]: I1216 13:04:53.018650 3970 server.go:1287] "Started kubelet" Dec 16 13:04:53.020275 kubelet[3970]: I1216 13:04:53.020221 3970 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:04:53.025687 kubelet[3970]: I1216 13:04:53.025660 3970 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:04:53.026585 kubelet[3970]: I1216 13:04:53.026532 3970 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:04:53.026991 kubelet[3970]: E1216 13:04:53.026936 3970 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-bc3c22631a\" not found" Dec 16 13:04:53.027440 kubelet[3970]: I1216 13:04:53.027428 3970 server.go:479] 
"Adding debug handlers to kubelet server" Dec 16 13:04:53.028790 kubelet[3970]: I1216 13:04:53.028740 3970 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:04:53.028959 kubelet[3970]: I1216 13:04:53.028917 3970 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:04:53.029283 kubelet[3970]: I1216 13:04:53.029021 3970 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:04:53.029416 kubelet[3970]: I1216 13:04:53.029405 3970 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:04:53.030040 kubelet[3970]: I1216 13:04:53.030022 3970 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:04:53.032849 kubelet[3970]: I1216 13:04:53.031026 3970 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:04:53.032849 kubelet[3970]: I1216 13:04:53.031983 3970 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:04:53.032849 kubelet[3970]: I1216 13:04:53.032008 3970 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:04:53.032849 kubelet[3970]: I1216 13:04:53.032024 3970 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:04:53.032849 kubelet[3970]: I1216 13:04:53.032031 3970 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:04:53.032849 kubelet[3970]: E1216 13:04:53.032067 3970 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:04:53.035020 kubelet[3970]: I1216 13:04:53.034999 3970 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:04:53.041304 kubelet[3970]: I1216 13:04:53.041045 3970 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:04:53.041304 kubelet[3970]: I1216 13:04:53.041230 3970 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:04:53.083105 kubelet[3970]: I1216 13:04:53.083088 3970 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:04:53.083196 kubelet[3970]: I1216 13:04:53.083109 3970 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:04:53.083196 kubelet[3970]: I1216 13:04:53.083125 3970 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:04:53.083292 kubelet[3970]: I1216 13:04:53.083277 3970 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:04:53.083317 kubelet[3970]: I1216 13:04:53.083289 3970 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:04:53.083317 kubelet[3970]: I1216 13:04:53.083306 3970 policy_none.go:49] "None policy: Start" Dec 16 13:04:53.083317 kubelet[3970]: I1216 13:04:53.083316 3970 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:04:53.083385 kubelet[3970]: I1216 13:04:53.083325 3970 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:04:53.083434 kubelet[3970]: I1216 13:04:53.083424 3970 state_mem.go:75] "Updated machine memory state" Dec 16 13:04:53.088309 kubelet[3970]: 
I1216 13:04:53.088284 3970 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:04:53.088863 kubelet[3970]: I1216 13:04:53.088426 3970 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:04:53.088863 kubelet[3970]: I1216 13:04:53.088440 3970 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:04:53.088863 kubelet[3970]: I1216 13:04:53.088636 3970 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:04:53.090331 kubelet[3970]: E1216 13:04:53.090310 3970 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:04:53.133351 kubelet[3970]: I1216 13:04:53.133333 3970 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.133508 kubelet[3970]: I1216 13:04:53.133500 3970 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.134235 kubelet[3970]: I1216 13:04:53.134217 3970 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.145223 kubelet[3970]: W1216 13:04:53.145178 3970 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:04:53.149755 kubelet[3970]: W1216 13:04:53.149737 3970 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:04:53.149884 kubelet[3970]: W1216 13:04:53.149872 3970 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: 
[must not contain dots] Dec 16 13:04:53.192213 kubelet[3970]: I1216 13:04:53.191063 3970 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.200152 kubelet[3970]: I1216 13:04:53.200120 3970 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.200247 kubelet[3970]: I1216 13:04:53.200205 3970 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.230574 kubelet[3970]: I1216 13:04:53.230536 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f032cd5149ec1f5f6b9d25e9ae9e40e8-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-bc3c22631a\" (UID: \"f032cd5149ec1f5f6b9d25e9ae9e40e8\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.230640 kubelet[3970]: I1216 13:04:53.230581 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f032cd5149ec1f5f6b9d25e9ae9e40e8-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-bc3c22631a\" (UID: \"f032cd5149ec1f5f6b9d25e9ae9e40e8\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.230640 kubelet[3970]: I1216 13:04:53.230602 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f032cd5149ec1f5f6b9d25e9ae9e40e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-bc3c22631a\" (UID: \"f032cd5149ec1f5f6b9d25e9ae9e40e8\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.230640 kubelet[3970]: I1216 13:04:53.230622 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.230712 kubelet[3970]: I1216 13:04:53.230642 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75871316865ba88fe69691642a612d6a-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-bc3c22631a\" (UID: \"75871316865ba88fe69691642a612d6a\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.230712 kubelet[3970]: I1216 13:04:53.230673 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.230712 kubelet[3970]: I1216 13:04:53.230690 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.230712 kubelet[3970]: I1216 13:04:53.230707 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:53.230792 
kubelet[3970]: I1216 13:04:53.230734 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b09e9e7f34ccccbcc607401fa72a4e3b-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" (UID: \"b09e9e7f34ccccbcc607401fa72a4e3b\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:54.008993 kubelet[3970]: I1216 13:04:54.007902 3970 apiserver.go:52] "Watching apiserver" Dec 16 13:04:54.029366 kubelet[3970]: I1216 13:04:54.029328 3970 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:04:54.067055 kubelet[3970]: I1216 13:04:54.066529 3970 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:54.067055 kubelet[3970]: I1216 13:04:54.066836 3970 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:54.067055 kubelet[3970]: I1216 13:04:54.067019 3970 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:54.086117 kubelet[3970]: W1216 13:04:54.086087 3970 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:04:54.086237 kubelet[3970]: E1216 13:04:54.086171 3970 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-bc3c22631a\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:54.086576 kubelet[3970]: W1216 13:04:54.086387 3970 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:04:54.086576 kubelet[3970]: E1216 13:04:54.086421 3970 kubelet.go:3196] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-bc3c22631a\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:54.088015 kubelet[3970]: W1216 13:04:54.087991 3970 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:04:54.088102 kubelet[3970]: E1216 13:04:54.088037 3970 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-bc3c22631a\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" Dec 16 13:04:54.097996 kubelet[3970]: I1216 13:04:54.097898 3970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-a-bc3c22631a" podStartSLOduration=1.097859518 podStartE2EDuration="1.097859518s" podCreationTimestamp="2025-12-16 13:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:04:54.09693628 +0000 UTC m=+1.157472898" watchObservedRunningTime="2025-12-16 13:04:54.097859518 +0000 UTC m=+1.158396133" Dec 16 13:04:54.119187 kubelet[3970]: I1216 13:04:54.118609 3970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-a-bc3c22631a" podStartSLOduration=1.118528392 podStartE2EDuration="1.118528392s" podCreationTimestamp="2025-12-16 13:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:04:54.1070898 +0000 UTC m=+1.167626432" watchObservedRunningTime="2025-12-16 13:04:54.118528392 +0000 UTC m=+1.179065015" Dec 16 13:04:55.670355 kubelet[3970]: I1216 13:04:55.670314 3970 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:04:55.670867 containerd[2529]: 
time="2025-12-16T13:04:55.670821800Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:04:55.671090 kubelet[3970]: I1216 13:04:55.671077 3970 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:04:56.302624 kubelet[3970]: I1216 13:04:56.302384 3970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-bc3c22631a" podStartSLOduration=3.3023557119999998 podStartE2EDuration="3.302355712s" podCreationTimestamp="2025-12-16 13:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:04:54.118682283 +0000 UTC m=+1.179218903" watchObservedRunningTime="2025-12-16 13:04:56.302355712 +0000 UTC m=+3.362892328" Dec 16 13:04:56.314485 systemd[1]: Created slice kubepods-besteffort-pod953dd59f_d9ac_4c44_8d47_7264816fb546.slice - libcontainer container kubepods-besteffort-pod953dd59f_d9ac_4c44_8d47_7264816fb546.slice. 
Dec 16 13:04:56.348291 kubelet[3970]: I1216 13:04:56.348258 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/953dd59f-d9ac-4c44-8d47-7264816fb546-kube-proxy\") pod \"kube-proxy-ztv6s\" (UID: \"953dd59f-d9ac-4c44-8d47-7264816fb546\") " pod="kube-system/kube-proxy-ztv6s" Dec 16 13:04:56.348433 kubelet[3970]: I1216 13:04:56.348298 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/953dd59f-d9ac-4c44-8d47-7264816fb546-xtables-lock\") pod \"kube-proxy-ztv6s\" (UID: \"953dd59f-d9ac-4c44-8d47-7264816fb546\") " pod="kube-system/kube-proxy-ztv6s" Dec 16 13:04:56.348433 kubelet[3970]: I1216 13:04:56.348319 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953dd59f-d9ac-4c44-8d47-7264816fb546-lib-modules\") pod \"kube-proxy-ztv6s\" (UID: \"953dd59f-d9ac-4c44-8d47-7264816fb546\") " pod="kube-system/kube-proxy-ztv6s" Dec 16 13:04:56.348433 kubelet[3970]: I1216 13:04:56.348343 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vd7b\" (UniqueName: \"kubernetes.io/projected/953dd59f-d9ac-4c44-8d47-7264816fb546-kube-api-access-9vd7b\") pod \"kube-proxy-ztv6s\" (UID: \"953dd59f-d9ac-4c44-8d47-7264816fb546\") " pod="kube-system/kube-proxy-ztv6s" Dec 16 13:04:56.454606 kubelet[3970]: E1216 13:04:56.454535 3970 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:04:56.454606 kubelet[3970]: E1216 13:04:56.454567 3970 projected.go:194] Error preparing data for projected volume kube-api-access-9vd7b for pod kube-system/kube-proxy-ztv6s: configmap "kube-root-ca.crt" not found Dec 16 13:04:56.454942 kubelet[3970]: E1216 13:04:56.454741 3970 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/953dd59f-d9ac-4c44-8d47-7264816fb546-kube-api-access-9vd7b podName:953dd59f-d9ac-4c44-8d47-7264816fb546 nodeName:}" failed. No retries permitted until 2025-12-16 13:04:56.95471259 +0000 UTC m=+4.015249205 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9vd7b" (UniqueName: "kubernetes.io/projected/953dd59f-d9ac-4c44-8d47-7264816fb546-kube-api-access-9vd7b") pod "kube-proxy-ztv6s" (UID: "953dd59f-d9ac-4c44-8d47-7264816fb546") : configmap "kube-root-ca.crt" not found Dec 16 13:04:56.838502 systemd[1]: Created slice kubepods-besteffort-podd45573ea_15d5_4b32_ad68_ad1d4cae8a4c.slice - libcontainer container kubepods-besteffort-podd45573ea_15d5_4b32_ad68_ad1d4cae8a4c.slice. Dec 16 13:04:56.852711 kubelet[3970]: I1216 13:04:56.852615 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc8hv\" (UniqueName: \"kubernetes.io/projected/d45573ea-15d5-4b32-ad68-ad1d4cae8a4c-kube-api-access-xc8hv\") pod \"tigera-operator-7dcd859c48-n5j6d\" (UID: \"d45573ea-15d5-4b32-ad68-ad1d4cae8a4c\") " pod="tigera-operator/tigera-operator-7dcd859c48-n5j6d" Dec 16 13:04:56.852711 kubelet[3970]: I1216 13:04:56.852661 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d45573ea-15d5-4b32-ad68-ad1d4cae8a4c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-n5j6d\" (UID: \"d45573ea-15d5-4b32-ad68-ad1d4cae8a4c\") " pod="tigera-operator/tigera-operator-7dcd859c48-n5j6d" Dec 16 13:04:57.142081 containerd[2529]: time="2025-12-16T13:04:57.142028234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-n5j6d,Uid:d45573ea-15d5-4b32-ad68-ad1d4cae8a4c,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:04:57.190731 containerd[2529]: time="2025-12-16T13:04:57.190643133Z" level=info 
msg="connecting to shim cc929fb3d7902897c6732e059e59641b84a7745de0ce8c218604c1d68caf5f14" address="unix:///run/containerd/s/71306de9476eeb9042d5b6de60ed0d9c7bded90b08223bdf0f19d52eed624551" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:57.216374 systemd[1]: Started cri-containerd-cc929fb3d7902897c6732e059e59641b84a7745de0ce8c218604c1d68caf5f14.scope - libcontainer container cc929fb3d7902897c6732e059e59641b84a7745de0ce8c218604c1d68caf5f14. Dec 16 13:04:57.223548 containerd[2529]: time="2025-12-16T13:04:57.223521716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztv6s,Uid:953dd59f-d9ac-4c44-8d47-7264816fb546,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:57.229328 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 13:04:57.229546 kernel: audit: type=1334 audit(1765890297.226:467): prog-id=157 op=LOAD Dec 16 13:04:57.226000 audit: BPF prog-id=157 op=LOAD Dec 16 13:04:57.231235 kernel: audit: type=1334 audit(1765890297.226:468): prog-id=158 op=LOAD Dec 16 13:04:57.226000 audit: BPF prog-id=158 op=LOAD Dec 16 13:04:57.236085 kernel: audit: type=1300 audit(1765890297.226:468): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.226000 audit[4034]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.226000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.243481 kernel: audit: type=1327 audit(1765890297.226:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.243560 kernel: audit: type=1334 audit(1765890297.227:469): prog-id=158 op=UNLOAD Dec 16 13:04:57.227000 audit: BPF prog-id=158 op=UNLOAD Dec 16 13:04:57.247976 kernel: audit: type=1300 audit(1765890297.227:469): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.227000 audit[4034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.252737 kernel: audit: type=1327 audit(1765890297.227:469): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.227000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.254139 kernel: audit: type=1334 audit(1765890297.227:470): prog-id=159 op=LOAD Dec 16 13:04:57.227000 audit: BPF prog-id=159 op=LOAD Dec 16 13:04:57.259169 kernel: audit: type=1300 audit(1765890297.227:470): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.227000 audit[4034]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.227000 audit: BPF prog-id=160 op=LOAD Dec 16 13:04:57.227000 audit[4034]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.227000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.227000 audit: BPF prog-id=160 op=UNLOAD Dec 16 13:04:57.227000 audit[4034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.227000 audit: BPF prog-id=159 op=UNLOAD Dec 16 13:04:57.227000 audit[4034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.227000 audit: BPF prog-id=161 op=LOAD Dec 16 13:04:57.227000 audit[4034]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:04:57.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.264179 kernel: audit: type=1327 audit(1765890297.227:470): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363393239666233643739303238393763363733326530353965353936 Dec 16 13:04:57.345823 containerd[2529]: time="2025-12-16T13:04:57.345794658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-n5j6d,Uid:d45573ea-15d5-4b32-ad68-ad1d4cae8a4c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cc929fb3d7902897c6732e059e59641b84a7745de0ce8c218604c1d68caf5f14\"" Dec 16 13:04:57.350173 containerd[2529]: time="2025-12-16T13:04:57.350129799Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:04:57.846571 containerd[2529]: time="2025-12-16T13:04:57.846480703Z" level=info msg="connecting to shim e8669f3d1c4030a5811680681f0841ec0afb1184b12a18870bc42daca310a3e9" address="unix:///run/containerd/s/a3ac8193d8bf6bf28448d9f17535e024d4b88fa5b11fa90d8c7b5c3402803c67" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:57.869327 systemd[1]: Started cri-containerd-e8669f3d1c4030a5811680681f0841ec0afb1184b12a18870bc42daca310a3e9.scope - libcontainer container e8669f3d1c4030a5811680681f0841ec0afb1184b12a18870bc42daca310a3e9. 
Dec 16 13:04:57.878000 audit: BPF prog-id=162 op=LOAD Dec 16 13:04:57.878000 audit: BPF prog-id=163 op=LOAD Dec 16 13:04:57.878000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4071 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363639663364316334303330613538313136383036383166303834 Dec 16 13:04:57.878000 audit: BPF prog-id=163 op=UNLOAD Dec 16 13:04:57.878000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4071 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363639663364316334303330613538313136383036383166303834 Dec 16 13:04:57.878000 audit: BPF prog-id=164 op=LOAD Dec 16 13:04:57.878000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4071 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.878000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363639663364316334303330613538313136383036383166303834 Dec 16 13:04:57.878000 audit: BPF prog-id=165 op=LOAD Dec 16 13:04:57.878000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4071 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363639663364316334303330613538313136383036383166303834 Dec 16 13:04:57.879000 audit: BPF prog-id=165 op=UNLOAD Dec 16 13:04:57.879000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4071 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363639663364316334303330613538313136383036383166303834 Dec 16 13:04:57.879000 audit: BPF prog-id=164 op=UNLOAD Dec 16 13:04:57.879000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4071 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:04:57.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363639663364316334303330613538313136383036383166303834 Dec 16 13:04:57.879000 audit: BPF prog-id=166 op=LOAD Dec 16 13:04:57.879000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4071 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:57.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363639663364316334303330613538313136383036383166303834 Dec 16 13:04:57.896551 containerd[2529]: time="2025-12-16T13:04:57.896524258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztv6s,Uid:953dd59f-d9ac-4c44-8d47-7264816fb546,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8669f3d1c4030a5811680681f0841ec0afb1184b12a18870bc42daca310a3e9\"" Dec 16 13:04:57.899725 containerd[2529]: time="2025-12-16T13:04:57.899697787Z" level=info msg="CreateContainer within sandbox \"e8669f3d1c4030a5811680681f0841ec0afb1184b12a18870bc42daca310a3e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:04:58.043843 containerd[2529]: time="2025-12-16T13:04:58.043814758Z" level=info msg="Container e058c9c9bcb6b10474c6789fffe4e83fe853c8cd0a86edcf5ee4c836f98a2186: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:58.187984 containerd[2529]: time="2025-12-16T13:04:58.187951518Z" level=info msg="CreateContainer within sandbox \"e8669f3d1c4030a5811680681f0841ec0afb1184b12a18870bc42daca310a3e9\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e058c9c9bcb6b10474c6789fffe4e83fe853c8cd0a86edcf5ee4c836f98a2186\"" Dec 16 13:04:58.188690 containerd[2529]: time="2025-12-16T13:04:58.188620658Z" level=info msg="StartContainer for \"e058c9c9bcb6b10474c6789fffe4e83fe853c8cd0a86edcf5ee4c836f98a2186\"" Dec 16 13:04:58.194368 containerd[2529]: time="2025-12-16T13:04:58.194332747Z" level=info msg="connecting to shim e058c9c9bcb6b10474c6789fffe4e83fe853c8cd0a86edcf5ee4c836f98a2186" address="unix:///run/containerd/s/a3ac8193d8bf6bf28448d9f17535e024d4b88fa5b11fa90d8c7b5c3402803c67" protocol=ttrpc version=3 Dec 16 13:04:58.264330 systemd[1]: Started cri-containerd-e058c9c9bcb6b10474c6789fffe4e83fe853c8cd0a86edcf5ee4c836f98a2186.scope - libcontainer container e058c9c9bcb6b10474c6789fffe4e83fe853c8cd0a86edcf5ee4c836f98a2186. Dec 16 13:04:58.310000 audit: BPF prog-id=167 op=LOAD Dec 16 13:04:58.310000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4071 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353863396339626362366231303437346336373839666666653465 Dec 16 13:04:58.310000 audit: BPF prog-id=168 op=LOAD Dec 16 13:04:58.310000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4071 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.310000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353863396339626362366231303437346336373839666666653465 Dec 16 13:04:58.311000 audit: BPF prog-id=168 op=UNLOAD Dec 16 13:04:58.311000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4071 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353863396339626362366231303437346336373839666666653465 Dec 16 13:04:58.311000 audit: BPF prog-id=167 op=UNLOAD Dec 16 13:04:58.311000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4071 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353863396339626362366231303437346336373839666666653465 Dec 16 13:04:58.311000 audit: BPF prog-id=169 op=LOAD Dec 16 13:04:58.311000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4071 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:04:58.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353863396339626362366231303437346336373839666666653465 Dec 16 13:04:58.334702 containerd[2529]: time="2025-12-16T13:04:58.334641025Z" level=info msg="StartContainer for \"e058c9c9bcb6b10474c6789fffe4e83fe853c8cd0a86edcf5ee4c836f98a2186\" returns successfully" Dec 16 13:04:58.439000 audit[4173]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=4173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.439000 audit[4173]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1ef96830 a2=0 a3=7ffe1ef9681c items=0 ppid=4122 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.439000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:04:58.441000 audit[4175]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=4175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.441000 audit[4175]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4a933620 a2=0 a3=7ffd4a93360c items=0 ppid=4122 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:04:58.443000 audit[4176]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=4176 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.444000 audit[4177]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=4177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.443000 audit[4176]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9d2b86b0 a2=0 a3=7ffc9d2b869c items=0 ppid=4122 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:04:58.446000 audit[4178]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=4178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.444000 audit[4177]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff282ed560 a2=0 a3=7fff282ed54c items=0 ppid=4122 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:04:58.446000 audit[4178]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1ba008c0 a2=0 a3=7ffe1ba008ac items=0 ppid=4122 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:04:58.449000 audit[4179]: NETFILTER_CFG table=filter:62 family=10 entries=1 
op=nft_register_chain pid=4179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.449000 audit[4179]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4a3a1de0 a2=0 a3=7ffe4a3a1dcc items=0 ppid=4122 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.449000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:04:58.542000 audit[4180]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.542000 audit[4180]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff9ee5b2b0 a2=0 a3=7fff9ee5b29c items=0 ppid=4122 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:04:58.544000 audit[4182]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.544000 audit[4182]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd4b01fc50 a2=0 a3=7ffd4b01fc3c items=0 ppid=4122 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.544000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 13:04:58.547000 audit[4185]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.547000 audit[4185]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe9cfdb1d0 a2=0 a3=7ffe9cfdb1bc items=0 ppid=4122 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 13:04:58.548000 audit[4186]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.548000 audit[4186]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc061be4b0 a2=0 a3=7ffc061be49c items=0 ppid=4122 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:04:58.551000 audit[4188]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.551000 audit[4188]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffc509ca6b0 a2=0 a3=7ffc509ca69c items=0 ppid=4122 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.551000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:04:58.552000 audit[4189]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.552000 audit[4189]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc6d5b990 a2=0 a3=7ffcc6d5b97c items=0 ppid=4122 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:04:58.554000 audit[4191]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.554000 audit[4191]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcd62f9100 a2=0 a3=7ffcd62f90ec items=0 ppid=4122 pid=4191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.554000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:04:58.557000 audit[4194]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.557000 audit[4194]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff6ecd3cd0 a2=0 a3=7fff6ecd3cbc items=0 ppid=4122 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 13:04:58.558000 audit[4195]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.558000 audit[4195]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcce1cb0c0 a2=0 a3=7ffcce1cb0ac items=0 ppid=4122 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.558000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:04:58.561000 audit[4197]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.561000 audit[4197]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7fff62436ac0 a2=0 a3=7fff62436aac items=0 ppid=4122 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:04:58.562000 audit[4198]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.562000 audit[4198]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce325c200 a2=0 a3=7ffce325c1ec items=0 ppid=4122 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:04:58.564000 audit[4200]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.564000 audit[4200]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1bbd8620 a2=0 a3=7ffe1bbd860c items=0 ppid=4122 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.564000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:04:58.566000 audit[4203]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4203 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.566000 audit[4203]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc62424f10 a2=0 a3=7ffc62424efc items=0 ppid=4122 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:04:58.569000 audit[4206]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.569000 audit[4206]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd9bd65ed0 a2=0 a3=7ffd9bd65ebc items=0 ppid=4122 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.569000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:04:58.570000 audit[4207]: NETFILTER_CFG table=nat:77 family=2 entries=1 
op=nft_register_chain pid=4207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.570000 audit[4207]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcdb333640 a2=0 a3=7ffcdb33362c items=0 ppid=4122 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:04:58.573000 audit[4209]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.573000 audit[4209]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffeb192be80 a2=0 a3=7ffeb192be6c items=0 ppid=4122 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:04:58.575000 audit[4212]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.575000 audit[4212]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff84c6bcd0 a2=0 a3=7fff84c6bcbc items=0 ppid=4122 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.575000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:04:58.578000 audit[4213]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.578000 audit[4213]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc1683540 a2=0 a3=7ffdc168352c items=0 ppid=4122 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:04:58.580000 audit[4215]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:58.580000 audit[4215]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffea321dcd0 a2=0 a3=7ffea321dcbc items=0 ppid=4122 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:04:58.683000 audit[4221]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:04:58.683000 audit[4221]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6641a050 a2=0 a3=7ffc6641a03c 
items=0 ppid=4122 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:04:58.713000 audit[4221]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:04:58.713000 audit[4221]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc6641a050 a2=0 a3=7ffc6641a03c items=0 ppid=4122 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:04:58.714000 audit[4226]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.714000 audit[4226]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff49da7ae0 a2=0 a3=7fff49da7acc items=0 ppid=4122 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.714000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:04:58.716000 audit[4228]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.716000 audit[4228]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffec3f97e70 a2=0 a3=7ffec3f97e5c items=0 ppid=4122 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.716000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 13:04:58.720000 audit[4231]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.720000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc5afe54f0 a2=0 a3=7ffc5afe54dc items=0 ppid=4122 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 13:04:58.721000 audit[4232]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.721000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe04603750 a2=0 a3=7ffe0460373c items=0 ppid=4122 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.721000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:04:58.723000 audit[4234]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.723000 audit[4234]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffd4ac900 a2=0 a3=7ffffd4ac8ec items=0 ppid=4122 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.723000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:04:58.724000 audit[4235]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.724000 audit[4235]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe38b62550 a2=0 a3=7ffe38b6253c items=0 ppid=4122 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.724000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:04:58.726000 audit[4237]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.726000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff83d6f340 a2=0 a3=7fff83d6f32c items=0 ppid=4122 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 13:04:58.729000 audit[4240]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.729000 audit[4240]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc1f995c20 a2=0 a3=7ffc1f995c0c items=0 ppid=4122 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.729000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:04:58.730000 audit[4241]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.730000 audit[4241]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc26058620 a2=0 a3=7ffc2605860c items=0 ppid=4122 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.730000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:04:58.733000 audit[4243]: NETFILTER_CFG 
table=filter:93 family=10 entries=1 op=nft_register_rule pid=4243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.733000 audit[4243]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc86a6d950 a2=0 a3=7ffc86a6d93c items=0 ppid=4122 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.733000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:04:58.734000 audit[4244]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.734000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefe65f9f0 a2=0 a3=7ffefe65f9dc items=0 ppid=4122 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.734000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:04:58.736000 audit[4246]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.736000 audit[4246]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd9a341780 a2=0 a3=7ffd9a34176c items=0 ppid=4122 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.736000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:04:58.739000 audit[4249]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.739000 audit[4249]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe90477710 a2=0 a3=7ffe904776fc items=0 ppid=4122 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.739000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:04:58.742000 audit[4252]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.742000 audit[4252]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca0703270 a2=0 a3=7ffca070325c items=0 ppid=4122 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.742000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 13:04:58.743000 audit[4253]: NETFILTER_CFG table=nat:98 family=10 
entries=1 op=nft_register_chain pid=4253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.743000 audit[4253]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff83992c10 a2=0 a3=7fff83992bfc items=0 ppid=4122 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:04:58.745000 audit[4255]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.745000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe8bb7f440 a2=0 a3=7ffe8bb7f42c items=0 ppid=4122 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:04:58.749000 audit[4258]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.749000 audit[4258]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff11c6c8d0 a2=0 a3=7fff11c6c8bc items=0 ppid=4122 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.749000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:04:58.751000 audit[4259]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.751000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef6c1e930 a2=0 a3=7ffef6c1e91c items=0 ppid=4122 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:04:58.754000 audit[4261]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.754000 audit[4261]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffed80025b0 a2=0 a3=7ffed800259c items=0 ppid=4122 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.754000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:04:58.755000 audit[4262]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.755000 audit[4262]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1b93b0b0 a2=0 
a3=7ffe1b93b09c items=0 ppid=4122 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.755000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:04:58.756000 audit[4264]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.756000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffda8c28e70 a2=0 a3=7ffda8c28e5c items=0 ppid=4122 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:04:58.759000 audit[4267]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:58.759000 audit[4267]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc6a168b60 a2=0 a3=7ffc6a168b4c items=0 ppid=4122 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.759000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:04:58.762000 audit[4269]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:04:58.762000 audit[4269]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff46f9c460 a2=0 a3=7fff46f9c44c items=0 ppid=4122 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.762000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:04:58.762000 audit[4269]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:04:58.762000 audit[4269]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff46f9c460 a2=0 a3=7fff46f9c44c items=0 ppid=4122 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.762000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:01.906730 kubelet[3970]: I1216 13:05:01.906446 3970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ztv6s" podStartSLOduration=5.906422659 podStartE2EDuration="5.906422659s" podCreationTimestamp="2025-12-16 13:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:04:59.094810677 +0000 UTC m=+6.155347318" watchObservedRunningTime="2025-12-16 13:05:01.906422659 +0000 UTC m=+8.966959275" Dec 16 13:05:03.367803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1647650246.mount: Deactivated successfully. 
Dec 16 13:05:04.078166 containerd[2529]: time="2025-12-16T13:05:04.078100394Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:04.081059 containerd[2529]: time="2025-12-16T13:05:04.080874482Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Dec 16 13:05:04.084477 containerd[2529]: time="2025-12-16T13:05:04.084449973Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:04.088302 containerd[2529]: time="2025-12-16T13:05:04.088262737Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:04.089604 containerd[2529]: time="2025-12-16T13:05:04.089224810Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.739047655s" Dec 16 13:05:04.089604 containerd[2529]: time="2025-12-16T13:05:04.089259164Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:05:04.090972 containerd[2529]: time="2025-12-16T13:05:04.090935446Z" level=info msg="CreateContainer within sandbox \"cc929fb3d7902897c6732e059e59641b84a7745de0ce8c218604c1d68caf5f14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:05:04.239684 containerd[2529]: time="2025-12-16T13:05:04.239648165Z" level=info msg="Container 
1637022d56a9d1541b2a26708e7dbc0678d4fb1fcfc4471dbdf9b79e56dc0c01: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:04.337522 containerd[2529]: time="2025-12-16T13:05:04.337361160Z" level=info msg="CreateContainer within sandbox \"cc929fb3d7902897c6732e059e59641b84a7745de0ce8c218604c1d68caf5f14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1637022d56a9d1541b2a26708e7dbc0678d4fb1fcfc4471dbdf9b79e56dc0c01\"" Dec 16 13:05:04.339130 containerd[2529]: time="2025-12-16T13:05:04.337919749Z" level=info msg="StartContainer for \"1637022d56a9d1541b2a26708e7dbc0678d4fb1fcfc4471dbdf9b79e56dc0c01\"" Dec 16 13:05:04.339130 containerd[2529]: time="2025-12-16T13:05:04.338732806Z" level=info msg="connecting to shim 1637022d56a9d1541b2a26708e7dbc0678d4fb1fcfc4471dbdf9b79e56dc0c01" address="unix:///run/containerd/s/71306de9476eeb9042d5b6de60ed0d9c7bded90b08223bdf0f19d52eed624551" protocol=ttrpc version=3 Dec 16 13:05:04.362381 systemd[1]: Started cri-containerd-1637022d56a9d1541b2a26708e7dbc0678d4fb1fcfc4471dbdf9b79e56dc0c01.scope - libcontainer container 1637022d56a9d1541b2a26708e7dbc0678d4fb1fcfc4471dbdf9b79e56dc0c01. 
Dec 16 13:05:04.375952 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 13:05:04.376054 kernel: audit: type=1334 audit(1765890304.371:539): prog-id=170 op=LOAD Dec 16 13:05:04.371000 audit: BPF prog-id=170 op=LOAD Dec 16 13:05:04.374000 audit: BPF prog-id=171 op=LOAD Dec 16 13:05:04.374000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4023 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:04.383076 kernel: audit: type=1334 audit(1765890304.374:540): prog-id=171 op=LOAD Dec 16 13:05:04.383133 kernel: audit: type=1300 audit(1765890304.374:540): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4023 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:04.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 Dec 16 13:05:04.389457 kernel: audit: type=1327 audit(1765890304.374:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 Dec 16 13:05:04.375000 audit: BPF prog-id=171 op=UNLOAD Dec 16 13:05:04.392232 kernel: audit: type=1334 audit(1765890304.375:541): prog-id=171 op=UNLOAD Dec 16 13:05:04.375000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4278 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:04.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 Dec 16 13:05:04.404163 kernel: audit: type=1300 audit(1765890304.375:541): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:04.404222 kernel: audit: type=1327 audit(1765890304.375:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 Dec 16 13:05:04.375000 audit: BPF prog-id=172 op=LOAD Dec 16 13:05:04.407180 kernel: audit: type=1334 audit(1765890304.375:542): prog-id=172 op=LOAD Dec 16 13:05:04.407243 kernel: audit: type=1300 audit(1765890304.375:542): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4023 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:04.375000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4023 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:04.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 Dec 16 13:05:04.420455 kernel: audit: type=1327 audit(1765890304.375:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 Dec 16 13:05:04.375000 audit: BPF prog-id=173 op=LOAD Dec 16 13:05:04.375000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4023 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:04.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 Dec 16 13:05:04.375000 audit: BPF prog-id=173 op=UNLOAD Dec 16 13:05:04.375000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:04.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 
Dec 16 13:05:04.375000 audit: BPF prog-id=172 op=UNLOAD Dec 16 13:05:04.375000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:04.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 Dec 16 13:05:04.375000 audit: BPF prog-id=174 op=LOAD Dec 16 13:05:04.375000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4023 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:04.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333730323264353661396431353431623261323637303865376462 Dec 16 13:05:04.446316 containerd[2529]: time="2025-12-16T13:05:04.446270893Z" level=info msg="StartContainer for \"1637022d56a9d1541b2a26708e7dbc0678d4fb1fcfc4471dbdf9b79e56dc0c01\" returns successfully" Dec 16 13:05:10.208420 sudo[2975]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:10.207000 audit[2975]: USER_END pid=2975 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:05:10.210781 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 13:05:10.210849 kernel: audit: type=1106 audit(1765890310.207:547): pid=2975 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:05:10.207000 audit[2975]: CRED_DISP pid=2975 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:05:10.222609 kernel: audit: type=1104 audit(1765890310.207:548): pid=2975 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:05:10.310820 sshd[2974]: Connection closed by 10.200.16.10 port 32882 Dec 16 13:05:10.309181 sshd-session[2971]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:10.311000 audit[2971]: USER_END pid=2971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:10.311000 audit[2971]: CRED_DISP pid=2971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:10.323096 systemd[1]: sshd@6-10.200.4.32:22-10.200.16.10:32882.service: Deactivated successfully. 
Dec 16 13:05:10.329899 kernel: audit: type=1106 audit(1765890310.311:549): pid=2971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:10.329972 kernel: audit: type=1104 audit(1765890310.311:550): pid=2971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:10.329025 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:05:10.329283 systemd[1]: session-9.scope: Consumed 3.600s CPU time, 227.3M memory peak. Dec 16 13:05:10.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.32:22-10.200.16.10:32882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:10.338665 systemd-logind[2509]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:05:10.339187 kernel: audit: type=1131 audit(1765890310.322:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.32:22-10.200.16.10:32882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:10.340526 systemd-logind[2509]: Removed session 9. 
Dec 16 13:05:11.117000 audit[4357]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:11.124240 kernel: audit: type=1325 audit(1765890311.117:552): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:11.117000 audit[4357]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc1fad6050 a2=0 a3=7ffc1fad603c items=0 ppid=4122 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.135174 kernel: audit: type=1300 audit(1765890311.117:552): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc1fad6050 a2=0 a3=7ffc1fad603c items=0 ppid=4122 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.117000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:11.134000 audit[4357]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:11.142722 kernel: audit: type=1327 audit(1765890311.117:552): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:11.142798 kernel: audit: type=1325 audit(1765890311.134:553): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:11.134000 audit[4357]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1fad6050 a2=0 a3=0 items=0 ppid=4122 pid=4357 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.149733 kernel: audit: type=1300 audit(1765890311.134:553): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1fad6050 a2=0 a3=0 items=0 ppid=4122 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:11.161000 audit[4359]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:11.161000 audit[4359]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd29ea40c0 a2=0 a3=7ffd29ea40ac items=0 ppid=4122 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:11.166000 audit[4359]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:11.166000 audit[4359]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd29ea40c0 a2=0 a3=0 items=0 ppid=4122 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.166000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:14.458000 audit[4361]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:14.458000 audit[4361]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc6324aa70 a2=0 a3=7ffc6324aa5c items=0 ppid=4122 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:14.466000 audit[4361]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:14.466000 audit[4361]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6324aa70 a2=0 a3=0 items=0 ppid=4122 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:14.482000 audit[4364]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:14.482000 audit[4364]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdd4216970 a2=0 a3=7ffdd421695c items=0 ppid=4122 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:14.492000 audit[4364]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:14.492000 audit[4364]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd4216970 a2=0 a3=0 items=0 ppid=4122 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:15.509568 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 13:05:15.509723 kernel: audit: type=1325 audit(1765890315.503:560): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:15.503000 audit[4367]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:15.515684 kernel: audit: type=1300 audit(1765890315.503:560): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6f01fa50 a2=0 a3=7ffd6f01fa3c items=0 ppid=4122 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.503000 audit[4367]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6f01fa50 a2=0 a3=7ffd6f01fa3c items=0 ppid=4122 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.503000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:15.520219 kernel: audit: type=1327 audit(1765890315.503:560): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:15.515000 audit[4367]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:15.529523 kernel: audit: type=1325 audit(1765890315.515:561): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:15.529584 kernel: audit: type=1300 audit(1765890315.515:561): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6f01fa50 a2=0 a3=0 items=0 ppid=4122 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.515000 audit[4367]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6f01fa50 a2=0 a3=0 items=0 ppid=4122 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:15.532712 kernel: audit: type=1327 audit(1765890315.515:561): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:15.515000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:16.292559 kubelet[3970]: I1216 13:05:16.292474 3970 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-n5j6d" podStartSLOduration=13.55139034 podStartE2EDuration="20.292450543s" podCreationTimestamp="2025-12-16 13:04:56 +0000 UTC" firstStartedPulling="2025-12-16 13:04:57.348806172 +0000 UTC m=+4.409342793" lastFinishedPulling="2025-12-16 13:05:04.089866388 +0000 UTC m=+11.150402996" observedRunningTime="2025-12-16 13:05:05.108221433 +0000 UTC m=+12.168758052" watchObservedRunningTime="2025-12-16 13:05:16.292450543 +0000 UTC m=+23.352987180" Dec 16 13:05:16.304494 systemd[1]: Created slice kubepods-besteffort-pode2fc7c97_3d33_427d_8aec_a81bac573634.slice - libcontainer container kubepods-besteffort-pode2fc7c97_3d33_427d_8aec_a81bac573634.slice. Dec 16 13:05:16.381765 kubelet[3970]: I1216 13:05:16.381732 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt2gf\" (UniqueName: \"kubernetes.io/projected/e2fc7c97-3d33-427d-8aec-a81bac573634-kube-api-access-kt2gf\") pod \"calico-typha-5d7764cb5c-95w88\" (UID: \"e2fc7c97-3d33-427d-8aec-a81bac573634\") " pod="calico-system/calico-typha-5d7764cb5c-95w88" Dec 16 13:05:16.381897 kubelet[3970]: I1216 13:05:16.381770 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2fc7c97-3d33-427d-8aec-a81bac573634-tigera-ca-bundle\") pod \"calico-typha-5d7764cb5c-95w88\" (UID: \"e2fc7c97-3d33-427d-8aec-a81bac573634\") " pod="calico-system/calico-typha-5d7764cb5c-95w88" Dec 16 13:05:16.381897 kubelet[3970]: I1216 13:05:16.381790 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e2fc7c97-3d33-427d-8aec-a81bac573634-typha-certs\") pod \"calico-typha-5d7764cb5c-95w88\" (UID: \"e2fc7c97-3d33-427d-8aec-a81bac573634\") " pod="calico-system/calico-typha-5d7764cb5c-95w88" Dec 16 13:05:16.453595 
systemd[1]: Created slice kubepods-besteffort-pod2ddc2d62_24f8_4470_a71f_98de1e2ed965.slice - libcontainer container kubepods-besteffort-pod2ddc2d62_24f8_4470_a71f_98de1e2ed965.slice. Dec 16 13:05:16.482888 kubelet[3970]: I1216 13:05:16.482859 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ddc2d62-24f8-4470-a71f-98de1e2ed965-lib-modules\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.483093 kubelet[3970]: I1216 13:05:16.482892 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvjmm\" (UniqueName: \"kubernetes.io/projected/2ddc2d62-24f8-4470-a71f-98de1e2ed965-kube-api-access-vvjmm\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.483093 kubelet[3970]: I1216 13:05:16.482938 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2ddc2d62-24f8-4470-a71f-98de1e2ed965-cni-net-dir\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.483093 kubelet[3970]: I1216 13:05:16.482957 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2ddc2d62-24f8-4470-a71f-98de1e2ed965-var-run-calico\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.483093 kubelet[3970]: I1216 13:05:16.482987 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/2ddc2d62-24f8-4470-a71f-98de1e2ed965-flexvol-driver-host\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.483093 kubelet[3970]: I1216 13:05:16.483004 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2ddc2d62-24f8-4470-a71f-98de1e2ed965-var-lib-calico\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.484082 kubelet[3970]: I1216 13:05:16.483024 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2ddc2d62-24f8-4470-a71f-98de1e2ed965-node-certs\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.484082 kubelet[3970]: I1216 13:05:16.483043 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2ddc2d62-24f8-4470-a71f-98de1e2ed965-policysync\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.484082 kubelet[3970]: I1216 13:05:16.483059 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ddc2d62-24f8-4470-a71f-98de1e2ed965-tigera-ca-bundle\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.484082 kubelet[3970]: I1216 13:05:16.483076 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2ddc2d62-24f8-4470-a71f-98de1e2ed965-xtables-lock\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.484082 kubelet[3970]: I1216 13:05:16.483093 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2ddc2d62-24f8-4470-a71f-98de1e2ed965-cni-bin-dir\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.484274 kubelet[3970]: I1216 13:05:16.483110 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2ddc2d62-24f8-4470-a71f-98de1e2ed965-cni-log-dir\") pod \"calico-node-sghrm\" (UID: \"2ddc2d62-24f8-4470-a71f-98de1e2ed965\") " pod="calico-system/calico-node-sghrm" Dec 16 13:05:16.527000 audit[4371]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:16.538196 kernel: audit: type=1325 audit(1765890316.527:562): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:16.538276 kernel: audit: type=1300 audit(1765890316.527:562): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3ece12b0 a2=0 a3=7fff3ece129c items=0 ppid=4122 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.527000 audit[4371]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3ece12b0 a2=0 a3=7fff3ece129c items=0 ppid=4122 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:16.543187 kernel: audit: type=1327 audit(1765890316.527:562): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:16.538000 audit[4371]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:16.548179 kernel: audit: type=1325 audit(1765890316.538:563): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:16.538000 audit[4371]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3ece12b0 a2=0 a3=0 items=0 ppid=4122 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.538000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:16.586031 kubelet[3970]: E1216 13:05:16.585356 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.586031 kubelet[3970]: W1216 13:05:16.585929 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.586031 kubelet[3970]: E1216 13:05:16.585973 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.586617 kubelet[3970]: E1216 13:05:16.586553 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.586754 kubelet[3970]: W1216 13:05:16.586733 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.586871 kubelet[3970]: E1216 13:05:16.586848 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.587263 kubelet[3970]: E1216 13:05:16.587229 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.587400 kubelet[3970]: W1216 13:05:16.587321 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.587475 kubelet[3970]: E1216 13:05:16.587466 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.587778 kubelet[3970]: E1216 13:05:16.587680 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.587778 kubelet[3970]: W1216 13:05:16.587735 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.587778 kubelet[3970]: E1216 13:05:16.587748 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.590373 kubelet[3970]: E1216 13:05:16.590360 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.590521 kubelet[3970]: W1216 13:05:16.590424 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.590521 kubelet[3970]: E1216 13:05:16.590441 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.590701 kubelet[3970]: E1216 13:05:16.590694 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.590742 kubelet[3970]: W1216 13:05:16.590735 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.591246 kubelet[3970]: E1216 13:05:16.591228 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.591550 kubelet[3970]: E1216 13:05:16.591516 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.591606 kubelet[3970]: W1216 13:05:16.591598 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.591678 kubelet[3970]: E1216 13:05:16.591669 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.591969 kubelet[3970]: E1216 13:05:16.591924 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.591969 kubelet[3970]: W1216 13:05:16.591933 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.591969 kubelet[3970]: E1216 13:05:16.591944 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.599028 kubelet[3970]: E1216 13:05:16.599013 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.599261 kubelet[3970]: W1216 13:05:16.599217 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.600141 kubelet[3970]: E1216 13:05:16.599841 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.600141 kubelet[3970]: W1216 13:05:16.599860 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.600141 kubelet[3970]: E1216 13:05:16.599873 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.600319 kubelet[3970]: E1216 13:05:16.600293 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.603641 kubelet[3970]: E1216 13:05:16.603625 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.603641 kubelet[3970]: W1216 13:05:16.603641 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.603742 kubelet[3970]: E1216 13:05:16.603652 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.617309 containerd[2529]: time="2025-12-16T13:05:16.617268298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d7764cb5c-95w88,Uid:e2fc7c97-3d33-427d-8aec-a81bac573634,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:16.673730 kubelet[3970]: E1216 13:05:16.673688 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:16.675304 containerd[2529]: time="2025-12-16T13:05:16.675265441Z" level=info msg="connecting to shim a0dfa9cc03d6efc9610d1a33fda7aa4db6fdd303151fb5ded19dc18231a970fc" address="unix:///run/containerd/s/b65d971bcf4f0baa4440b4fb5457d411486c1db1ea54cd932e8ad2f7e6c612c8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:16.732354 systemd[1]: Started 
cri-containerd-a0dfa9cc03d6efc9610d1a33fda7aa4db6fdd303151fb5ded19dc18231a970fc.scope - libcontainer container a0dfa9cc03d6efc9610d1a33fda7aa4db6fdd303151fb5ded19dc18231a970fc. Dec 16 13:05:16.758179 containerd[2529]: time="2025-12-16T13:05:16.757192658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sghrm,Uid:2ddc2d62-24f8-4470-a71f-98de1e2ed965,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:16.767727 kubelet[3970]: E1216 13:05:16.767707 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.767727 kubelet[3970]: W1216 13:05:16.767728 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.769528 kubelet[3970]: E1216 13:05:16.767745 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.769528 kubelet[3970]: E1216 13:05:16.767863 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.769528 kubelet[3970]: W1216 13:05:16.767869 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.769528 kubelet[3970]: E1216 13:05:16.767877 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.769528 kubelet[3970]: E1216 13:05:16.767985 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.769528 kubelet[3970]: W1216 13:05:16.767991 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.769528 kubelet[3970]: E1216 13:05:16.767997 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.769528 kubelet[3970]: E1216 13:05:16.768144 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.769528 kubelet[3970]: W1216 13:05:16.768151 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.769528 kubelet[3970]: E1216 13:05:16.768181 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.769814 kubelet[3970]: E1216 13:05:16.768295 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.769814 kubelet[3970]: W1216 13:05:16.768300 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.769814 kubelet[3970]: E1216 13:05:16.768307 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.769814 kubelet[3970]: E1216 13:05:16.768403 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.769814 kubelet[3970]: W1216 13:05:16.768409 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.769814 kubelet[3970]: E1216 13:05:16.768416 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.769814 kubelet[3970]: E1216 13:05:16.768511 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.769814 kubelet[3970]: W1216 13:05:16.768515 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.769814 kubelet[3970]: E1216 13:05:16.768521 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.769814 kubelet[3970]: E1216 13:05:16.768613 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.770086 kubelet[3970]: W1216 13:05:16.768618 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.770086 kubelet[3970]: E1216 13:05:16.768625 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.770086 kubelet[3970]: E1216 13:05:16.768726 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.770086 kubelet[3970]: W1216 13:05:16.768731 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.770086 kubelet[3970]: E1216 13:05:16.768736 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.770309 kubelet[3970]: E1216 13:05:16.770295 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.770341 kubelet[3970]: W1216 13:05:16.770311 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.770341 kubelet[3970]: E1216 13:05:16.770325 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.770446 kubelet[3970]: E1216 13:05:16.770437 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.770474 kubelet[3970]: W1216 13:05:16.770447 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.770474 kubelet[3970]: E1216 13:05:16.770454 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.771026 kubelet[3970]: E1216 13:05:16.770558 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.771026 kubelet[3970]: W1216 13:05:16.770565 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.771026 kubelet[3970]: E1216 13:05:16.770572 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.771026 kubelet[3970]: E1216 13:05:16.770671 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.771026 kubelet[3970]: W1216 13:05:16.770676 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.771026 kubelet[3970]: E1216 13:05:16.770681 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.771026 kubelet[3970]: E1216 13:05:16.770764 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.771026 kubelet[3970]: W1216 13:05:16.770769 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.771026 kubelet[3970]: E1216 13:05:16.770775 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.771026 kubelet[3970]: E1216 13:05:16.770858 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.771309 kubelet[3970]: W1216 13:05:16.770863 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.771309 kubelet[3970]: E1216 13:05:16.770869 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.771309 kubelet[3970]: E1216 13:05:16.770958 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.771309 kubelet[3970]: W1216 13:05:16.770963 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.771309 kubelet[3970]: E1216 13:05:16.770968 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.771309 kubelet[3970]: E1216 13:05:16.771064 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.771309 kubelet[3970]: W1216 13:05:16.771070 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.771309 kubelet[3970]: E1216 13:05:16.771076 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.771309 kubelet[3970]: E1216 13:05:16.771182 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.771309 kubelet[3970]: W1216 13:05:16.771187 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.771591 kubelet[3970]: E1216 13:05:16.771193 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.772311 kubelet[3970]: E1216 13:05:16.772297 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.772311 kubelet[3970]: W1216 13:05:16.772311 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.772405 kubelet[3970]: E1216 13:05:16.772324 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.773084 kubelet[3970]: E1216 13:05:16.773070 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.773084 kubelet[3970]: W1216 13:05:16.773084 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.773864 kubelet[3970]: E1216 13:05:16.773096 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.785414 kubelet[3970]: E1216 13:05:16.785398 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.785414 kubelet[3970]: W1216 13:05:16.785413 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.785612 kubelet[3970]: E1216 13:05:16.785425 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.786211 kubelet[3970]: I1216 13:05:16.786193 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38439d67-e506-407a-b65d-e7dd3b4f13ff-kubelet-dir\") pod \"csi-node-driver-swgr5\" (UID: \"38439d67-e506-407a-b65d-e7dd3b4f13ff\") " pod="calico-system/csi-node-driver-swgr5" Dec 16 13:05:16.786398 kubelet[3970]: E1216 13:05:16.786377 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.786398 kubelet[3970]: W1216 13:05:16.786397 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.786496 kubelet[3970]: E1216 13:05:16.786487 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.786525 kubelet[3970]: I1216 13:05:16.786510 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/38439d67-e506-407a-b65d-e7dd3b4f13ff-varrun\") pod \"csi-node-driver-swgr5\" (UID: \"38439d67-e506-407a-b65d-e7dd3b4f13ff\") " pod="calico-system/csi-node-driver-swgr5" Dec 16 13:05:16.786688 kubelet[3970]: E1216 13:05:16.786678 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.786820 kubelet[3970]: W1216 13:05:16.786690 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.786820 kubelet[3970]: E1216 13:05:16.786724 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.786820 kubelet[3970]: I1216 13:05:16.786742 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/38439d67-e506-407a-b65d-e7dd3b4f13ff-registration-dir\") pod \"csi-node-driver-swgr5\" (UID: \"38439d67-e506-407a-b65d-e7dd3b4f13ff\") " pod="calico-system/csi-node-driver-swgr5" Dec 16 13:05:16.786927 kubelet[3970]: E1216 13:05:16.786896 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.786927 kubelet[3970]: W1216 13:05:16.786903 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.786927 kubelet[3970]: E1216 13:05:16.786912 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.787257 kubelet[3970]: E1216 13:05:16.787239 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.787257 kubelet[3970]: W1216 13:05:16.787254 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.787313 kubelet[3970]: E1216 13:05:16.787266 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.787342 kubelet[3970]: I1216 13:05:16.787327 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/38439d67-e506-407a-b65d-e7dd3b4f13ff-socket-dir\") pod \"csi-node-driver-swgr5\" (UID: \"38439d67-e506-407a-b65d-e7dd3b4f13ff\") " pod="calico-system/csi-node-driver-swgr5" Dec 16 13:05:16.787473 kubelet[3970]: E1216 13:05:16.787459 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.787473 kubelet[3970]: W1216 13:05:16.787469 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.788118 kubelet[3970]: E1216 13:05:16.788103 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.788118 kubelet[3970]: W1216 13:05:16.788118 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.788437 kubelet[3970]: E1216 13:05:16.788424 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.788477 kubelet[3970]: W1216 13:05:16.788439 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.788477 kubelet[3970]: E1216 13:05:16.788451 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.788477 kubelet[3970]: E1216 13:05:16.788465 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.788477 kubelet[3970]: E1216 13:05:16.788477 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.788915 kubelet[3970]: E1216 13:05:16.788848 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.788915 kubelet[3970]: W1216 13:05:16.788858 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.789358 kubelet[3970]: E1216 13:05:16.789337 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.789511 kubelet[3970]: E1216 13:05:16.789430 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.789511 kubelet[3970]: W1216 13:05:16.789442 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.789511 kubelet[3970]: E1216 13:05:16.789453 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.789826 kubelet[3970]: E1216 13:05:16.789759 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.789826 kubelet[3970]: W1216 13:05:16.789769 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.789826 kubelet[3970]: E1216 13:05:16.789779 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.790085 kubelet[3970]: E1216 13:05:16.790025 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.790085 kubelet[3970]: W1216 13:05:16.790033 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.790085 kubelet[3970]: E1216 13:05:16.790041 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.790320 kubelet[3970]: E1216 13:05:16.790312 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.790458 kubelet[3970]: W1216 13:05:16.790362 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.790458 kubelet[3970]: E1216 13:05:16.790373 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.790458 kubelet[3970]: I1216 13:05:16.790409 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q4hr\" (UniqueName: \"kubernetes.io/projected/38439d67-e506-407a-b65d-e7dd3b4f13ff-kube-api-access-2q4hr\") pod \"csi-node-driver-swgr5\" (UID: \"38439d67-e506-407a-b65d-e7dd3b4f13ff\") " pod="calico-system/csi-node-driver-swgr5" Dec 16 13:05:16.790839 kubelet[3970]: E1216 13:05:16.790795 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.790839 kubelet[3970]: W1216 13:05:16.790809 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.790839 kubelet[3970]: E1216 13:05:16.790820 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.791311 kubelet[3970]: E1216 13:05:16.791303 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.791406 kubelet[3970]: W1216 13:05:16.791374 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.791406 kubelet[3970]: E1216 13:05:16.791389 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.815377 containerd[2529]: time="2025-12-16T13:05:16.815294164Z" level=info msg="connecting to shim d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184" address="unix:///run/containerd/s/82837e815465eb95658234395a287fbc727dafd9f1d6fe2d1d8fc4dd2fe5fa70" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:16.828000 audit: BPF prog-id=175 op=LOAD Dec 16 13:05:16.832000 audit: BPF prog-id=176 op=LOAD Dec 16 13:05:16.832000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4394 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646661396363303364366566633936313064316133336664613761 Dec 16 13:05:16.832000 audit: BPF prog-id=176 op=UNLOAD Dec 16 13:05:16.832000 audit[4411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4411 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646661396363303364366566633936313064316133336664613761 Dec 16 13:05:16.833000 audit: BPF prog-id=177 op=LOAD Dec 16 13:05:16.833000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4394 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646661396363303364366566633936313064316133336664613761 Dec 16 13:05:16.833000 audit: BPF prog-id=178 op=LOAD Dec 16 13:05:16.833000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4394 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646661396363303364366566633936313064316133336664613761 Dec 16 13:05:16.833000 audit: BPF prog-id=178 op=UNLOAD Dec 16 13:05:16.833000 audit[4411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 
ppid=4394 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646661396363303364366566633936313064316133336664613761 Dec 16 13:05:16.833000 audit: BPF prog-id=177 op=UNLOAD Dec 16 13:05:16.833000 audit[4411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646661396363303364366566633936313064316133336664613761 Dec 16 13:05:16.833000 audit: BPF prog-id=179 op=LOAD Dec 16 13:05:16.833000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4394 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646661396363303364366566633936313064316133336664613761 Dec 16 13:05:16.856679 systemd[1]: Started cri-containerd-d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184.scope - 
libcontainer container d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184. Dec 16 13:05:16.890944 kubelet[3970]: E1216 13:05:16.890887 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.890944 kubelet[3970]: W1216 13:05:16.890903 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.891066 kubelet[3970]: E1216 13:05:16.890952 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.891201 kubelet[3970]: E1216 13:05:16.891189 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.891201 kubelet[3970]: W1216 13:05:16.891197 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.891265 kubelet[3970]: E1216 13:05:16.891213 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.891389 kubelet[3970]: E1216 13:05:16.891377 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.891389 kubelet[3970]: W1216 13:05:16.891387 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.891511 kubelet[3970]: E1216 13:05:16.891402 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.891572 kubelet[3970]: E1216 13:05:16.891548 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.891572 kubelet[3970]: W1216 13:05:16.891557 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.891572 kubelet[3970]: E1216 13:05:16.891565 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.891730 kubelet[3970]: E1216 13:05:16.891723 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.891757 kubelet[3970]: W1216 13:05:16.891731 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.891757 kubelet[3970]: E1216 13:05:16.891748 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.891874 kubelet[3970]: E1216 13:05:16.891867 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.891967 kubelet[3970]: W1216 13:05:16.891910 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.891967 kubelet[3970]: E1216 13:05:16.891924 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.892176 kubelet[3970]: E1216 13:05:16.892141 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.892255 kubelet[3970]: W1216 13:05:16.892151 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.892333 kubelet[3970]: E1216 13:05:16.892236 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.893229 kubelet[3970]: E1216 13:05:16.893213 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.893229 kubelet[3970]: W1216 13:05:16.893228 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.893676 kubelet[3970]: E1216 13:05:16.893389 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.893676 kubelet[3970]: W1216 13:05:16.893396 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.893676 kubelet[3970]: E1216 13:05:16.893483 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.893676 kubelet[3970]: E1216 13:05:16.893518 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.893676 kubelet[3970]: E1216 13:05:16.893556 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.893676 kubelet[3970]: W1216 13:05:16.893563 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.894176 kubelet[3970]: E1216 13:05:16.894118 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.895073 kubelet[3970]: E1216 13:05:16.894882 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.895073 kubelet[3970]: W1216 13:05:16.894898 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.895769 kubelet[3970]: E1216 13:05:16.895149 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.895769 kubelet[3970]: W1216 13:05:16.895262 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.895769 kubelet[3970]: E1216 13:05:16.895589 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.895769 kubelet[3970]: E1216 13:05:16.895608 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.895769 kubelet[3970]: E1216 13:05:16.895724 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.895769 kubelet[3970]: W1216 13:05:16.895731 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.895932 kubelet[3970]: E1216 13:05:16.895792 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.897086 kubelet[3970]: E1216 13:05:16.897067 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.897086 kubelet[3970]: W1216 13:05:16.897083 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.897399 kubelet[3970]: E1216 13:05:16.897226 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.897658 kubelet[3970]: E1216 13:05:16.897646 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.897697 kubelet[3970]: W1216 13:05:16.897658 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.897000 audit: BPF prog-id=180 op=LOAD Dec 16 13:05:16.898790 kubelet[3970]: E1216 13:05:16.898768 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.899404 kubelet[3970]: E1216 13:05:16.899385 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.899404 kubelet[3970]: W1216 13:05:16.899399 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.899729 kubelet[3970]: E1216 13:05:16.899712 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.898000 audit: BPF prog-id=181 op=LOAD Dec 16 13:05:16.898000 audit[4490]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363432303432383364343438326537323738343134616365616163 Dec 16 13:05:16.898000 audit: BPF prog-id=181 op=UNLOAD Dec 16 13:05:16.898000 audit[4490]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363432303432383364343438326537323738343134616365616163 Dec 16 13:05:16.900621 kubelet[3970]: E1216 13:05:16.900059 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.900621 kubelet[3970]: W1216 13:05:16.900069 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.900621 kubelet[3970]: E1216 13:05:16.900148 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.900621 kubelet[3970]: E1216 13:05:16.900484 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.900621 kubelet[3970]: W1216 13:05:16.900496 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.900621 kubelet[3970]: E1216 13:05:16.900590 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.901398 kubelet[3970]: E1216 13:05:16.901382 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.901398 kubelet[3970]: W1216 13:05:16.901394 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.900000 audit: BPF prog-id=182 op=LOAD Dec 16 13:05:16.900000 audit[4490]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363432303432383364343438326537323738343134616365616163 Dec 16 13:05:16.900000 audit: BPF prog-id=183 op=LOAD Dec 16 13:05:16.900000 audit[4490]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363432303432383364343438326537323738343134616365616163 Dec 16 13:05:16.900000 audit: BPF prog-id=183 op=UNLOAD Dec 16 13:05:16.900000 audit[4490]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363432303432383364343438326537323738343134616365616163 Dec 16 13:05:16.900000 audit: BPF prog-id=182 op=UNLOAD Dec 16 13:05:16.900000 audit[4490]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363432303432383364343438326537323738343134616365616163 Dec 16 13:05:16.900000 audit: BPF prog-id=184 op=LOAD Dec 16 13:05:16.900000 
audit[4490]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:16.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435363432303432383364343438326537323738343134616365616163 Dec 16 13:05:16.902183 kubelet[3970]: E1216 13:05:16.902136 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.902183 kubelet[3970]: W1216 13:05:16.902146 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.902319 kubelet[3970]: E1216 13:05:16.902272 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.902319 kubelet[3970]: E1216 13:05:16.902310 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.903212 kubelet[3970]: E1216 13:05:16.903190 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.903212 kubelet[3970]: W1216 13:05:16.903209 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.903307 kubelet[3970]: E1216 13:05:16.903292 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.903427 kubelet[3970]: E1216 13:05:16.903419 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.903461 kubelet[3970]: W1216 13:05:16.903427 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.903514 kubelet[3970]: E1216 13:05:16.903501 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.903655 kubelet[3970]: E1216 13:05:16.903645 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.903655 kubelet[3970]: W1216 13:05:16.903653 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.903798 kubelet[3970]: E1216 13:05:16.903786 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.903845 kubelet[3970]: E1216 13:05:16.903831 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.903845 kubelet[3970]: W1216 13:05:16.903837 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.903951 kubelet[3970]: E1216 13:05:16.903939 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.905065 kubelet[3970]: E1216 13:05:16.905040 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.905148 kubelet[3970]: W1216 13:05:16.905079 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.905148 kubelet[3970]: E1216 13:05:16.905094 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:16.913020 kubelet[3970]: E1216 13:05:16.912991 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:16.913020 kubelet[3970]: W1216 13:05:16.913002 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:16.913020 kubelet[3970]: E1216 13:05:16.913014 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:16.917566 containerd[2529]: time="2025-12-16T13:05:16.917533605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d7764cb5c-95w88,Uid:e2fc7c97-3d33-427d-8aec-a81bac573634,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0dfa9cc03d6efc9610d1a33fda7aa4db6fdd303151fb5ded19dc18231a970fc\"" Dec 16 13:05:16.920354 containerd[2529]: time="2025-12-16T13:05:16.920329225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:05:16.931851 containerd[2529]: time="2025-12-16T13:05:16.931822643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sghrm,Uid:2ddc2d62-24f8-4470-a71f-98de1e2ed965,Namespace:calico-system,Attempt:0,} returns sandbox id \"d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184\"" Dec 16 13:05:18.033143 kubelet[3970]: E1216 13:05:18.033087 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:18.301233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692458860.mount: Deactivated successfully. 
Dec 16 13:05:19.595417 containerd[2529]: time="2025-12-16T13:05:19.595362254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:19.602314 containerd[2529]: time="2025-12-16T13:05:19.602184786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 13:05:19.611137 containerd[2529]: time="2025-12-16T13:05:19.611108255Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:19.623856 containerd[2529]: time="2025-12-16T13:05:19.623801832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:19.624284 containerd[2529]: time="2025-12-16T13:05:19.624171228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.703083674s" Dec 16 13:05:19.624284 containerd[2529]: time="2025-12-16T13:05:19.624204858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:05:19.625439 containerd[2529]: time="2025-12-16T13:05:19.625354414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:05:19.634737 containerd[2529]: time="2025-12-16T13:05:19.634632053Z" level=info msg="CreateContainer within sandbox \"a0dfa9cc03d6efc9610d1a33fda7aa4db6fdd303151fb5ded19dc18231a970fc\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:05:19.662704 containerd[2529]: time="2025-12-16T13:05:19.662680855Z" level=info msg="Container f226d1938fc1ce3fe4603eb0f47a80ad49c2e79370d0f603dafa4da8776e607e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:19.686797 containerd[2529]: time="2025-12-16T13:05:19.686769331Z" level=info msg="CreateContainer within sandbox \"a0dfa9cc03d6efc9610d1a33fda7aa4db6fdd303151fb5ded19dc18231a970fc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f226d1938fc1ce3fe4603eb0f47a80ad49c2e79370d0f603dafa4da8776e607e\"" Dec 16 13:05:19.687244 containerd[2529]: time="2025-12-16T13:05:19.687217650Z" level=info msg="StartContainer for \"f226d1938fc1ce3fe4603eb0f47a80ad49c2e79370d0f603dafa4da8776e607e\"" Dec 16 13:05:19.689120 containerd[2529]: time="2025-12-16T13:05:19.689077301Z" level=info msg="connecting to shim f226d1938fc1ce3fe4603eb0f47a80ad49c2e79370d0f603dafa4da8776e607e" address="unix:///run/containerd/s/b65d971bcf4f0baa4440b4fb5457d411486c1db1ea54cd932e8ad2f7e6c612c8" protocol=ttrpc version=3 Dec 16 13:05:19.712365 systemd[1]: Started cri-containerd-f226d1938fc1ce3fe4603eb0f47a80ad49c2e79370d0f603dafa4da8776e607e.scope - libcontainer container f226d1938fc1ce3fe4603eb0f47a80ad49c2e79370d0f603dafa4da8776e607e. 
Dec 16 13:05:19.721000 audit: BPF prog-id=185 op=LOAD Dec 16 13:05:19.721000 audit: BPF prog-id=186 op=LOAD Dec 16 13:05:19.721000 audit[4561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4394 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:19.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323664313933386663316365336665343630336562306634376138 Dec 16 13:05:19.721000 audit: BPF prog-id=186 op=UNLOAD Dec 16 13:05:19.721000 audit[4561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:19.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323664313933386663316365336665343630336562306634376138 Dec 16 13:05:19.721000 audit: BPF prog-id=187 op=LOAD Dec 16 13:05:19.721000 audit[4561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4394 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:19.721000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323664313933386663316365336665343630336562306634376138 Dec 16 13:05:19.721000 audit: BPF prog-id=188 op=LOAD Dec 16 13:05:19.721000 audit[4561]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4394 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:19.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323664313933386663316365336665343630336562306634376138 Dec 16 13:05:19.721000 audit: BPF prog-id=188 op=UNLOAD Dec 16 13:05:19.721000 audit[4561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:19.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323664313933386663316365336665343630336562306634376138 Dec 16 13:05:19.721000 audit: BPF prog-id=187 op=UNLOAD Dec 16 13:05:19.721000 audit[4561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:19.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323664313933386663316365336665343630336562306634376138 Dec 16 13:05:19.721000 audit: BPF prog-id=189 op=LOAD Dec 16 13:05:19.721000 audit[4561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4394 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:19.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632323664313933386663316365336665343630336562306634376138 Dec 16 13:05:19.758751 containerd[2529]: time="2025-12-16T13:05:19.758712298Z" level=info msg="StartContainer for \"f226d1938fc1ce3fe4603eb0f47a80ad49c2e79370d0f603dafa4da8776e607e\" returns successfully" Dec 16 13:05:20.032851 kubelet[3970]: E1216 13:05:20.032798 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:20.191654 kubelet[3970]: E1216 13:05:20.191553 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.191654 kubelet[3970]: W1216 13:05:20.191604 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Dec 16 13:05:20.191974 kubelet[3970]: E1216 13:05:20.191630 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.192171 kubelet[3970]: E1216 13:05:20.192117 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.192171 kubelet[3970]: W1216 13:05:20.192132 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.192289 kubelet[3970]: E1216 13:05:20.192148 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.192485 kubelet[3970]: E1216 13:05:20.192452 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.192485 kubelet[3970]: W1216 13:05:20.192462 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.192599 kubelet[3970]: E1216 13:05:20.192548 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.192825 kubelet[3970]: E1216 13:05:20.192818 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.192954 kubelet[3970]: W1216 13:05:20.192863 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.192954 kubelet[3970]: E1216 13:05:20.192885 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.193254 kubelet[3970]: E1216 13:05:20.193236 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.193381 kubelet[3970]: W1216 13:05:20.193305 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.193381 kubelet[3970]: E1216 13:05:20.193316 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.193548 kubelet[3970]: E1216 13:05:20.193509 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.193548 kubelet[3970]: W1216 13:05:20.193516 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.193548 kubelet[3970]: E1216 13:05:20.193523 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.193778 kubelet[3970]: E1216 13:05:20.193735 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.193778 kubelet[3970]: W1216 13:05:20.193742 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.193778 kubelet[3970]: E1216 13:05:20.193749 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.194013 kubelet[3970]: E1216 13:05:20.193960 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.194013 kubelet[3970]: W1216 13:05:20.193967 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.194013 kubelet[3970]: E1216 13:05:20.193975 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.194266 kubelet[3970]: E1216 13:05:20.194213 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.194266 kubelet[3970]: W1216 13:05:20.194220 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.194266 kubelet[3970]: E1216 13:05:20.194228 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.194492 kubelet[3970]: E1216 13:05:20.194484 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.194566 kubelet[3970]: W1216 13:05:20.194541 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.194566 kubelet[3970]: E1216 13:05:20.194564 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.194706 kubelet[3970]: E1216 13:05:20.194697 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.194854 kubelet[3970]: W1216 13:05:20.194706 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.194854 kubelet[3970]: E1216 13:05:20.194714 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.194854 kubelet[3970]: E1216 13:05:20.194842 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.194854 kubelet[3970]: W1216 13:05:20.194847 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.194956 kubelet[3970]: E1216 13:05:20.194854 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.195237 kubelet[3970]: E1216 13:05:20.194979 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.195237 kubelet[3970]: W1216 13:05:20.194985 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.195237 kubelet[3970]: E1216 13:05:20.194992 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.195237 kubelet[3970]: E1216 13:05:20.195111 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.195237 kubelet[3970]: W1216 13:05:20.195118 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.195237 kubelet[3970]: E1216 13:05:20.195126 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.195389 kubelet[3970]: E1216 13:05:20.195265 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.195389 kubelet[3970]: W1216 13:05:20.195271 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.195389 kubelet[3970]: E1216 13:05:20.195278 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.213705 kubelet[3970]: E1216 13:05:20.213680 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.213705 kubelet[3970]: W1216 13:05:20.213697 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.213840 kubelet[3970]: E1216 13:05:20.213711 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.213958 kubelet[3970]: E1216 13:05:20.213935 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.213958 kubelet[3970]: W1216 13:05:20.213955 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.214025 kubelet[3970]: E1216 13:05:20.213969 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.214119 kubelet[3970]: E1216 13:05:20.214105 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.214119 kubelet[3970]: W1216 13:05:20.214115 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.214194 kubelet[3970]: E1216 13:05:20.214124 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.214316 kubelet[3970]: E1216 13:05:20.214298 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.214316 kubelet[3970]: W1216 13:05:20.214314 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.214384 kubelet[3970]: E1216 13:05:20.214325 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.214463 kubelet[3970]: E1216 13:05:20.214452 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.214463 kubelet[3970]: W1216 13:05:20.214460 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.214517 kubelet[3970]: E1216 13:05:20.214467 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.214613 kubelet[3970]: E1216 13:05:20.214589 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.214613 kubelet[3970]: W1216 13:05:20.214610 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.214666 kubelet[3970]: E1216 13:05:20.214624 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.214822 kubelet[3970]: E1216 13:05:20.214800 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.214822 kubelet[3970]: W1216 13:05:20.214818 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.214869 kubelet[3970]: E1216 13:05:20.214833 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.215094 kubelet[3970]: E1216 13:05:20.215082 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.215094 kubelet[3970]: W1216 13:05:20.215092 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.215176 kubelet[3970]: E1216 13:05:20.215105 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.215259 kubelet[3970]: E1216 13:05:20.215247 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.215259 kubelet[3970]: W1216 13:05:20.215256 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.215319 kubelet[3970]: E1216 13:05:20.215264 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.215387 kubelet[3970]: E1216 13:05:20.215375 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.215387 kubelet[3970]: W1216 13:05:20.215383 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.215445 kubelet[3970]: E1216 13:05:20.215392 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.215514 kubelet[3970]: E1216 13:05:20.215503 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.215514 kubelet[3970]: W1216 13:05:20.215511 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.215615 kubelet[3970]: E1216 13:05:20.215518 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.215647 kubelet[3970]: E1216 13:05:20.215630 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.215647 kubelet[3970]: W1216 13:05:20.215636 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.215697 kubelet[3970]: E1216 13:05:20.215654 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.215810 kubelet[3970]: E1216 13:05:20.215793 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.215810 kubelet[3970]: W1216 13:05:20.215807 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.215862 kubelet[3970]: E1216 13:05:20.215823 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.215983 kubelet[3970]: E1216 13:05:20.215967 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.215983 kubelet[3970]: W1216 13:05:20.215980 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.216036 kubelet[3970]: E1216 13:05:20.215992 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.216322 kubelet[3970]: E1216 13:05:20.216216 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.216322 kubelet[3970]: W1216 13:05:20.216227 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.216322 kubelet[3970]: E1216 13:05:20.216236 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.216573 kubelet[3970]: E1216 13:05:20.216561 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.216611 kubelet[3970]: W1216 13:05:20.216573 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.216611 kubelet[3970]: E1216 13:05:20.216598 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:20.216776 kubelet[3970]: E1216 13:05:20.216765 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.216776 kubelet[3970]: W1216 13:05:20.216773 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.216832 kubelet[3970]: E1216 13:05:20.216781 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:20.217026 kubelet[3970]: E1216 13:05:20.217016 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:20.217026 kubelet[3970]: W1216 13:05:20.217025 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:20.217086 kubelet[3970]: E1216 13:05:20.217032 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.118029 containerd[2529]: time="2025-12-16T13:05:21.117958258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:21.122008 containerd[2529]: time="2025-12-16T13:05:21.121864852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=741" Dec 16 13:05:21.125452 containerd[2529]: time="2025-12-16T13:05:21.125414461Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:21.126715 kubelet[3970]: I1216 13:05:21.126688 3970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:05:21.129744 containerd[2529]: time="2025-12-16T13:05:21.129199623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:21.129744 containerd[2529]: time="2025-12-16T13:05:21.129633684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.504132824s" Dec 16 13:05:21.129744 containerd[2529]: time="2025-12-16T13:05:21.129664772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:05:21.132381 containerd[2529]: time="2025-12-16T13:05:21.132342186Z" level=info 
msg="CreateContainer within sandbox \"d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:05:21.161655 containerd[2529]: time="2025-12-16T13:05:21.161625722Z" level=info msg="Container 0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:21.197062 containerd[2529]: time="2025-12-16T13:05:21.197018864Z" level=info msg="CreateContainer within sandbox \"d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389\"" Dec 16 13:05:21.197815 containerd[2529]: time="2025-12-16T13:05:21.197777708Z" level=info msg="StartContainer for \"0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389\"" Dec 16 13:05:21.200868 kubelet[3970]: E1216 13:05:21.200840 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.200868 kubelet[3970]: W1216 13:05:21.200864 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.201248 kubelet[3970]: E1216 13:05:21.201178 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.201475 kubelet[3970]: E1216 13:05:21.201463 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.201515 kubelet[3970]: W1216 13:05:21.201479 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.201515 kubelet[3970]: E1216 13:05:21.201494 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.202834 containerd[2529]: time="2025-12-16T13:05:21.202557110Z" level=info msg="connecting to shim 0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389" address="unix:///run/containerd/s/82837e815465eb95658234395a287fbc727dafd9f1d6fe2d1d8fc4dd2fe5fa70" protocol=ttrpc version=3 Dec 16 13:05:21.202906 kubelet[3970]: E1216 13:05:21.202645 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.202906 kubelet[3970]: W1216 13:05:21.202658 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.202906 kubelet[3970]: E1216 13:05:21.202675 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.203432 kubelet[3970]: E1216 13:05:21.203247 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.203432 kubelet[3970]: W1216 13:05:21.203259 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.203432 kubelet[3970]: E1216 13:05:21.203274 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.203830 kubelet[3970]: E1216 13:05:21.203672 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.203830 kubelet[3970]: W1216 13:05:21.203681 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.203830 kubelet[3970]: E1216 13:05:21.203689 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.204168 kubelet[3970]: E1216 13:05:21.204038 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.204168 kubelet[3970]: W1216 13:05:21.204045 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.204168 kubelet[3970]: E1216 13:05:21.204051 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.204374 kubelet[3970]: E1216 13:05:21.204292 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.204473 kubelet[3970]: W1216 13:05:21.204404 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.204473 kubelet[3970]: E1216 13:05:21.204415 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.204783 kubelet[3970]: E1216 13:05:21.204637 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.204783 kubelet[3970]: W1216 13:05:21.204643 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.204783 kubelet[3970]: E1216 13:05:21.204650 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.204982 kubelet[3970]: E1216 13:05:21.204901 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.205060 kubelet[3970]: W1216 13:05:21.205010 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.205060 kubelet[3970]: E1216 13:05:21.205018 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.205200 kubelet[3970]: E1216 13:05:21.205195 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.205289 kubelet[3970]: W1216 13:05:21.205235 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.205289 kubelet[3970]: E1216 13:05:21.205243 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.205495 kubelet[3970]: E1216 13:05:21.205489 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.205687 kubelet[3970]: W1216 13:05:21.205608 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.205687 kubelet[3970]: E1216 13:05:21.205619 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.208170 kubelet[3970]: E1216 13:05:21.205874 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.208170 kubelet[3970]: W1216 13:05:21.205882 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.208170 kubelet[3970]: E1216 13:05:21.205890 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.208633 kubelet[3970]: E1216 13:05:21.208452 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.208633 kubelet[3970]: W1216 13:05:21.208465 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.208633 kubelet[3970]: E1216 13:05:21.208479 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.208901 kubelet[3970]: E1216 13:05:21.208826 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.208901 kubelet[3970]: W1216 13:05:21.208834 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.208901 kubelet[3970]: E1216 13:05:21.208844 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.209241 kubelet[3970]: E1216 13:05:21.209117 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.209241 kubelet[3970]: W1216 13:05:21.209125 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.209241 kubelet[3970]: E1216 13:05:21.209134 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.221495 kubelet[3970]: E1216 13:05:21.221470 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.221495 kubelet[3970]: W1216 13:05:21.221495 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.221593 kubelet[3970]: E1216 13:05:21.221509 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.221964 kubelet[3970]: E1216 13:05:21.221950 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.221964 kubelet[3970]: W1216 13:05:21.221961 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.222046 kubelet[3970]: E1216 13:05:21.221976 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.222168 kubelet[3970]: E1216 13:05:21.222138 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.222168 kubelet[3970]: W1216 13:05:21.222167 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.222228 kubelet[3970]: E1216 13:05:21.222185 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.222332 kubelet[3970]: E1216 13:05:21.222323 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.222359 kubelet[3970]: W1216 13:05:21.222331 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.222381 kubelet[3970]: E1216 13:05:21.222361 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.222654 kubelet[3970]: E1216 13:05:21.222628 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.222654 kubelet[3970]: W1216 13:05:21.222651 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.222766 kubelet[3970]: E1216 13:05:21.222663 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.223012 kubelet[3970]: E1216 13:05:21.222991 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.223012 kubelet[3970]: W1216 13:05:21.223001 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.223115 kubelet[3970]: E1216 13:05:21.223101 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.223267 kubelet[3970]: E1216 13:05:21.223257 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.223308 kubelet[3970]: W1216 13:05:21.223267 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.223391 kubelet[3970]: E1216 13:05:21.223341 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.223564 kubelet[3970]: E1216 13:05:21.223557 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.223598 kubelet[3970]: W1216 13:05:21.223565 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.223749 kubelet[3970]: E1216 13:05:21.223737 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.224257 kubelet[3970]: E1216 13:05:21.224208 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.224257 kubelet[3970]: W1216 13:05:21.224220 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.224257 kubelet[3970]: E1216 13:05:21.224234 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.224772 kubelet[3970]: E1216 13:05:21.224750 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.224772 kubelet[3970]: W1216 13:05:21.224762 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.225007 kubelet[3970]: E1216 13:05:21.224957 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.225007 kubelet[3970]: W1216 13:05:21.224965 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.225007 kubelet[3970]: E1216 13:05:21.224975 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.225130 kubelet[3970]: E1216 13:05:21.225103 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.225130 kubelet[3970]: W1216 13:05:21.225108 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.225130 kubelet[3970]: E1216 13:05:21.225115 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.226576 kubelet[3970]: E1216 13:05:21.226269 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.226576 kubelet[3970]: W1216 13:05:21.226282 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.226576 kubelet[3970]: E1216 13:05:21.226297 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.226576 kubelet[3970]: E1216 13:05:21.226534 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.226843 kubelet[3970]: E1216 13:05:21.226817 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.226843 kubelet[3970]: W1216 13:05:21.226837 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.226934 kubelet[3970]: E1216 13:05:21.226917 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.227027 kubelet[3970]: E1216 13:05:21.226999 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.227027 kubelet[3970]: W1216 13:05:21.227022 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.227083 kubelet[3970]: E1216 13:05:21.227029 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.227150 kubelet[3970]: E1216 13:05:21.227126 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.227150 kubelet[3970]: W1216 13:05:21.227145 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.227150 kubelet[3970]: E1216 13:05:21.227152 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.227352 kubelet[3970]: E1216 13:05:21.227341 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.227352 kubelet[3970]: W1216 13:05:21.227349 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.227405 kubelet[3970]: E1216 13:05:21.227356 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:21.227789 kubelet[3970]: E1216 13:05:21.227761 3970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:21.227789 kubelet[3970]: W1216 13:05:21.227769 3970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:21.228042 kubelet[3970]: E1216 13:05:21.228026 3970 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:21.235356 systemd[1]: Started cri-containerd-0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389.scope - libcontainer container 0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389. Dec 16 13:05:21.279000 audit: BPF prog-id=190 op=LOAD Dec 16 13:05:21.281670 kernel: kauditd_printk_skb: 68 callbacks suppressed Dec 16 13:05:21.281775 kernel: audit: type=1334 audit(1765890321.279:588): prog-id=190 op=LOAD Dec 16 13:05:21.279000 audit[4651]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4478 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.290313 kernel: audit: type=1300 audit(1765890321.279:588): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4478 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.279000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626635633037663566363635313536393033346561316333656637 Dec 16 13:05:21.296367 kernel: audit: type=1327 audit(1765890321.279:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626635633037663566363635313536393033346561316333656637 Dec 16 13:05:21.306238 kernel: audit: type=1334 audit(1765890321.279:589): prog-id=191 op=LOAD Dec 16 13:05:21.279000 audit: BPF prog-id=191 op=LOAD Dec 16 13:05:21.279000 audit[4651]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4478 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.313180 kernel: audit: type=1300 audit(1765890321.279:589): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4478 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626635633037663566363635313536393033346561316333656637 Dec 16 13:05:21.319173 kernel: audit: type=1327 audit(1765890321.279:589): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626635633037663566363635313536393033346561316333656637 Dec 16 13:05:21.279000 audit: BPF prog-id=191 op=UNLOAD Dec 16 13:05:21.279000 audit[4651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.329250 kernel: audit: type=1334 audit(1765890321.279:590): prog-id=191 op=UNLOAD Dec 16 13:05:21.329301 kernel: audit: type=1300 audit(1765890321.279:590): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626635633037663566363635313536393033346561316333656637 Dec 16 13:05:21.335245 kernel: audit: type=1327 audit(1765890321.279:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626635633037663566363635313536393033346561316333656637 Dec 16 13:05:21.338830 kernel: audit: type=1334 audit(1765890321.279:591): prog-id=190 op=UNLOAD Dec 16 13:05:21.279000 audit: BPF prog-id=190 op=UNLOAD Dec 16 13:05:21.279000 audit[4651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 
ppid=4478 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626635633037663566363635313536393033346561316333656637 Dec 16 13:05:21.279000 audit: BPF prog-id=192 op=LOAD Dec 16 13:05:21.279000 audit[4651]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4478 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626635633037663566363635313536393033346561316333656637 Dec 16 13:05:21.342823 containerd[2529]: time="2025-12-16T13:05:21.342749505Z" level=info msg="StartContainer for \"0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389\" returns successfully" Dec 16 13:05:21.354710 systemd[1]: cri-containerd-0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389.scope: Deactivated successfully. 
Dec 16 13:05:21.356000 audit: BPF prog-id=192 op=UNLOAD Dec 16 13:05:21.359309 containerd[2529]: time="2025-12-16T13:05:21.359281768Z" level=info msg="received container exit event container_id:\"0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389\" id:\"0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389\" pid:4683 exited_at:{seconds:1765890321 nanos:358876925}" Dec 16 13:05:21.379940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0abf5c07f5f6651569034ea1c3ef74caa82bda8ab88ad0b5689cc29958fa9389-rootfs.mount: Deactivated successfully. Dec 16 13:05:22.033264 kubelet[3970]: E1216 13:05:22.033151 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:22.150797 kubelet[3970]: I1216 13:05:22.150727 3970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d7764cb5c-95w88" podStartSLOduration=3.445007112 podStartE2EDuration="6.150704695s" podCreationTimestamp="2025-12-16 13:05:16 +0000 UTC" firstStartedPulling="2025-12-16 13:05:16.919249478 +0000 UTC m=+23.979786104" lastFinishedPulling="2025-12-16 13:05:19.624947076 +0000 UTC m=+26.685483687" observedRunningTime="2025-12-16 13:05:20.137270752 +0000 UTC m=+27.197807370" watchObservedRunningTime="2025-12-16 13:05:22.150704695 +0000 UTC m=+29.211241435" Dec 16 13:05:24.032859 kubelet[3970]: E1216 13:05:24.032782 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:24.136785 
containerd[2529]: time="2025-12-16T13:05:24.136731730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:05:26.033525 kubelet[3970]: E1216 13:05:26.033467 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:27.706055 containerd[2529]: time="2025-12-16T13:05:27.705994413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:27.708993 containerd[2529]: time="2025-12-16T13:05:27.708959119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 13:05:27.712873 containerd[2529]: time="2025-12-16T13:05:27.712806654Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:27.720645 containerd[2529]: time="2025-12-16T13:05:27.720609917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:27.721572 containerd[2529]: time="2025-12-16T13:05:27.721542328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.584764552s" Dec 16 13:05:27.721656 containerd[2529]: time="2025-12-16T13:05:27.721578603Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:05:27.723837 containerd[2529]: time="2025-12-16T13:05:27.723811897Z" level=info msg="CreateContainer within sandbox \"d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:05:27.754853 containerd[2529]: time="2025-12-16T13:05:27.754828734Z" level=info msg="Container 31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:27.775420 containerd[2529]: time="2025-12-16T13:05:27.775396940Z" level=info msg="CreateContainer within sandbox \"d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d\"" Dec 16 13:05:27.776352 containerd[2529]: time="2025-12-16T13:05:27.776238053Z" level=info msg="StartContainer for \"31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d\"" Dec 16 13:05:27.778974 containerd[2529]: time="2025-12-16T13:05:27.778938572Z" level=info msg="connecting to shim 31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d" address="unix:///run/containerd/s/82837e815465eb95658234395a287fbc727dafd9f1d6fe2d1d8fc4dd2fe5fa70" protocol=ttrpc version=3 Dec 16 13:05:27.800359 systemd[1]: Started cri-containerd-31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d.scope - libcontainer container 31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d. 
Dec 16 13:05:27.832000 audit: BPF prog-id=193 op=LOAD Dec 16 13:05:27.834823 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 13:05:27.834928 kernel: audit: type=1334 audit(1765890327.832:594): prog-id=193 op=LOAD Dec 16 13:05:27.832000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4478 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.840784 kernel: audit: type=1300 audit(1765890327.832:594): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4478 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643030636530353739386465663261306137623136313466613966 Dec 16 13:05:27.846725 kernel: audit: type=1327 audit(1765890327.832:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643030636530353739386465663261306137623136313466613966 Dec 16 13:05:27.832000 audit: BPF prog-id=194 op=LOAD Dec 16 13:05:27.849055 kernel: audit: type=1334 audit(1765890327.832:595): prog-id=194 op=LOAD Dec 16 13:05:27.832000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4478 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.854007 kernel: audit: type=1300 audit(1765890327.832:595): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4478 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643030636530353739386465663261306137623136313466613966 Dec 16 13:05:27.861085 kernel: audit: type=1327 audit(1765890327.832:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643030636530353739386465663261306137623136313466613966 Dec 16 13:05:27.832000 audit: BPF prog-id=194 op=UNLOAD Dec 16 13:05:27.866177 kernel: audit: type=1334 audit(1765890327.832:596): prog-id=194 op=UNLOAD Dec 16 13:05:27.832000 audit[4729]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643030636530353739386465663261306137623136313466613966 Dec 16 13:05:27.877563 kernel: audit: type=1300 audit(1765890327.832:596): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4729 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.877631 kernel: audit: type=1327 audit(1765890327.832:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643030636530353739386465663261306137623136313466613966 Dec 16 13:05:27.879654 kernel: audit: type=1334 audit(1765890327.832:597): prog-id=193 op=UNLOAD Dec 16 13:05:27.832000 audit: BPF prog-id=193 op=UNLOAD Dec 16 13:05:27.832000 audit[4729]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643030636530353739386465663261306137623136313466613966 Dec 16 13:05:27.832000 audit: BPF prog-id=195 op=LOAD Dec 16 13:05:27.832000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4478 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:27.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643030636530353739386465663261306137623136313466613966 Dec 16 13:05:27.892701 
containerd[2529]: time="2025-12-16T13:05:27.892630460Z" level=info msg="StartContainer for \"31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d\" returns successfully" Dec 16 13:05:28.033339 kubelet[3970]: E1216 13:05:28.033185 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:29.022492 systemd[1]: cri-containerd-31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d.scope: Deactivated successfully. Dec 16 13:05:29.022824 systemd[1]: cri-containerd-31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d.scope: Consumed 433ms CPU time, 194.1M memory peak, 171.3M written to disk. Dec 16 13:05:29.026000 audit: BPF prog-id=195 op=UNLOAD Dec 16 13:05:29.028457 containerd[2529]: time="2025-12-16T13:05:29.028142527Z" level=info msg="received container exit event container_id:\"31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d\" id:\"31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d\" pid:4742 exited_at:{seconds:1765890329 nanos:27111710}" Dec 16 13:05:29.049655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31d00ce05798def2a0a7b1614fa9f5a65a987a75abfc1d82cbb087500c7c310d-rootfs.mount: Deactivated successfully. Dec 16 13:05:29.127177 kubelet[3970]: I1216 13:05:29.127134 3970 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:05:29.176739 systemd[1]: Created slice kubepods-burstable-podd9b60a6d_0ab6_4b06_a75c_aa98a45e065b.slice - libcontainer container kubepods-burstable-podd9b60a6d_0ab6_4b06_a75c_aa98a45e065b.slice. 
Dec 16 13:05:29.211151 systemd[1]: Created slice kubepods-besteffort-pod5156a21d_5db3_420c_a907_ae3cb29e174c.slice - libcontainer container kubepods-besteffort-pod5156a21d_5db3_420c_a907_ae3cb29e174c.slice. Dec 16 13:05:29.219930 systemd[1]: Created slice kubepods-besteffort-pod3ce56745_c6a8_40c1_81e7_b66ad27dd817.slice - libcontainer container kubepods-besteffort-pod3ce56745_c6a8_40c1_81e7_b66ad27dd817.slice. Dec 16 13:05:29.226962 systemd[1]: Created slice kubepods-burstable-pod550ac59b_859a_421a_a47e_2547bf61257f.slice - libcontainer container kubepods-burstable-pod550ac59b_859a_421a_a47e_2547bf61257f.slice. Dec 16 13:05:29.233952 systemd[1]: Created slice kubepods-besteffort-pod7e4e3625_dd87_4672_8c32_4a4421202c74.slice - libcontainer container kubepods-besteffort-pod7e4e3625_dd87_4672_8c32_4a4421202c74.slice. Dec 16 13:05:29.240218 systemd[1]: Created slice kubepods-besteffort-pod7ece9954_95e0_4c99_bc7f_1fc28a21ac1f.slice - libcontainer container kubepods-besteffort-pod7ece9954_95e0_4c99_bc7f_1fc28a21ac1f.slice. Dec 16 13:05:29.244246 systemd[1]: Created slice kubepods-besteffort-pod46b8f3f5_9271_4b12_86c1_faf1b8e7af82.slice - libcontainer container kubepods-besteffort-pod46b8f3f5_9271_4b12_86c1_faf1b8e7af82.slice. 
Dec 16 13:05:29.271396 kubelet[3970]: I1216 13:05:29.270479 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tblq\" (UniqueName: \"kubernetes.io/projected/46b8f3f5-9271-4b12-86c1-faf1b8e7af82-kube-api-access-6tblq\") pod \"calico-apiserver-56b946485d-6kjlw\" (UID: \"46b8f3f5-9271-4b12-86c1-faf1b8e7af82\") " pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" Dec 16 13:05:29.271396 kubelet[3970]: I1216 13:05:29.270538 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9svxs\" (UniqueName: \"kubernetes.io/projected/d9b60a6d-0ab6-4b06-a75c-aa98a45e065b-kube-api-access-9svxs\") pod \"coredns-668d6bf9bc-28jk6\" (UID: \"d9b60a6d-0ab6-4b06-a75c-aa98a45e065b\") " pod="kube-system/coredns-668d6bf9bc-28jk6" Dec 16 13:05:29.271396 kubelet[3970]: I1216 13:05:29.270559 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x4wm\" (UniqueName: \"kubernetes.io/projected/550ac59b-859a-421a-a47e-2547bf61257f-kube-api-access-2x4wm\") pod \"coredns-668d6bf9bc-jxhjr\" (UID: \"550ac59b-859a-421a-a47e-2547bf61257f\") " pod="kube-system/coredns-668d6bf9bc-jxhjr" Dec 16 13:05:29.271396 kubelet[3970]: I1216 13:05:29.270670 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ece9954-95e0-4c99-bc7f-1fc28a21ac1f-goldmane-ca-bundle\") pod \"goldmane-666569f655-k2n4d\" (UID: \"7ece9954-95e0-4c99-bc7f-1fc28a21ac1f\") " pod="calico-system/goldmane-666569f655-k2n4d" Dec 16 13:05:29.271396 kubelet[3970]: I1216 13:05:29.270694 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzsvf\" (UniqueName: \"kubernetes.io/projected/7e4e3625-dd87-4672-8c32-4a4421202c74-kube-api-access-dzsvf\") pod \"whisker-5c8459879b-drtr9\" (UID: 
\"7e4e3625-dd87-4672-8c32-4a4421202c74\") " pod="calico-system/whisker-5c8459879b-drtr9" Dec 16 13:05:29.271624 kubelet[3970]: I1216 13:05:29.270712 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ece9954-95e0-4c99-bc7f-1fc28a21ac1f-config\") pod \"goldmane-666569f655-k2n4d\" (UID: \"7ece9954-95e0-4c99-bc7f-1fc28a21ac1f\") " pod="calico-system/goldmane-666569f655-k2n4d" Dec 16 13:05:29.271624 kubelet[3970]: I1216 13:05:29.270826 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e4e3625-dd87-4672-8c32-4a4421202c74-whisker-ca-bundle\") pod \"whisker-5c8459879b-drtr9\" (UID: \"7e4e3625-dd87-4672-8c32-4a4421202c74\") " pod="calico-system/whisker-5c8459879b-drtr9" Dec 16 13:05:29.271624 kubelet[3970]: I1216 13:05:29.270853 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46b8f3f5-9271-4b12-86c1-faf1b8e7af82-calico-apiserver-certs\") pod \"calico-apiserver-56b946485d-6kjlw\" (UID: \"46b8f3f5-9271-4b12-86c1-faf1b8e7af82\") " pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" Dec 16 13:05:29.271624 kubelet[3970]: I1216 13:05:29.270873 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ce56745-c6a8-40c1-81e7-b66ad27dd817-tigera-ca-bundle\") pod \"calico-kube-controllers-8bf7db64d-zpn94\" (UID: \"3ce56745-c6a8-40c1-81e7-b66ad27dd817\") " pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" Dec 16 13:05:29.271624 kubelet[3970]: I1216 13:05:29.270985 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vvn\" (UniqueName: 
\"kubernetes.io/projected/3ce56745-c6a8-40c1-81e7-b66ad27dd817-kube-api-access-57vvn\") pod \"calico-kube-controllers-8bf7db64d-zpn94\" (UID: \"3ce56745-c6a8-40c1-81e7-b66ad27dd817\") " pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" Dec 16 13:05:29.271737 kubelet[3970]: I1216 13:05:29.271007 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5156a21d-5db3-420c-a907-ae3cb29e174c-calico-apiserver-certs\") pod \"calico-apiserver-56b946485d-w9zv5\" (UID: \"5156a21d-5db3-420c-a907-ae3cb29e174c\") " pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" Dec 16 13:05:29.271737 kubelet[3970]: I1216 13:05:29.271026 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c8cn\" (UniqueName: \"kubernetes.io/projected/5156a21d-5db3-420c-a907-ae3cb29e174c-kube-api-access-8c8cn\") pod \"calico-apiserver-56b946485d-w9zv5\" (UID: \"5156a21d-5db3-420c-a907-ae3cb29e174c\") " pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" Dec 16 13:05:29.271737 kubelet[3970]: I1216 13:05:29.271088 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e4e3625-dd87-4672-8c32-4a4421202c74-whisker-backend-key-pair\") pod \"whisker-5c8459879b-drtr9\" (UID: \"7e4e3625-dd87-4672-8c32-4a4421202c74\") " pod="calico-system/whisker-5c8459879b-drtr9" Dec 16 13:05:29.271737 kubelet[3970]: I1216 13:05:29.271108 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9b60a6d-0ab6-4b06-a75c-aa98a45e065b-config-volume\") pod \"coredns-668d6bf9bc-28jk6\" (UID: \"d9b60a6d-0ab6-4b06-a75c-aa98a45e065b\") " pod="kube-system/coredns-668d6bf9bc-28jk6" Dec 16 13:05:29.271737 kubelet[3970]: I1216 13:05:29.271180 3970 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fg6d\" (UniqueName: \"kubernetes.io/projected/7ece9954-95e0-4c99-bc7f-1fc28a21ac1f-kube-api-access-6fg6d\") pod \"goldmane-666569f655-k2n4d\" (UID: \"7ece9954-95e0-4c99-bc7f-1fc28a21ac1f\") " pod="calico-system/goldmane-666569f655-k2n4d" Dec 16 13:05:29.271822 kubelet[3970]: I1216 13:05:29.271242 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/550ac59b-859a-421a-a47e-2547bf61257f-config-volume\") pod \"coredns-668d6bf9bc-jxhjr\" (UID: \"550ac59b-859a-421a-a47e-2547bf61257f\") " pod="kube-system/coredns-668d6bf9bc-jxhjr" Dec 16 13:05:29.271822 kubelet[3970]: I1216 13:05:29.271264 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7ece9954-95e0-4c99-bc7f-1fc28a21ac1f-goldmane-key-pair\") pod \"goldmane-666569f655-k2n4d\" (UID: \"7ece9954-95e0-4c99-bc7f-1fc28a21ac1f\") " pod="calico-system/goldmane-666569f655-k2n4d" Dec 16 13:05:29.481239 containerd[2529]: time="2025-12-16T13:05:29.481198801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-28jk6,Uid:d9b60a6d-0ab6-4b06-a75c-aa98a45e065b,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:29.514886 containerd[2529]: time="2025-12-16T13:05:29.514858909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-w9zv5,Uid:5156a21d-5db3-420c-a907-ae3cb29e174c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:29.524764 containerd[2529]: time="2025-12-16T13:05:29.524730833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8bf7db64d-zpn94,Uid:3ce56745-c6a8-40c1-81e7-b66ad27dd817,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:29.530255 containerd[2529]: time="2025-12-16T13:05:29.530220492Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jxhjr,Uid:550ac59b-859a-421a-a47e-2547bf61257f,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:29.538736 containerd[2529]: time="2025-12-16T13:05:29.538711698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c8459879b-drtr9,Uid:7e4e3625-dd87-4672-8c32-4a4421202c74,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:29.544222 containerd[2529]: time="2025-12-16T13:05:29.544200159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k2n4d,Uid:7ece9954-95e0-4c99-bc7f-1fc28a21ac1f,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:29.546623 containerd[2529]: time="2025-12-16T13:05:29.546603814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-6kjlw,Uid:46b8f3f5-9271-4b12-86c1-faf1b8e7af82,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:30.037924 systemd[1]: Created slice kubepods-besteffort-pod38439d67_e506_407a_b65d_e7dd3b4f13ff.slice - libcontainer container kubepods-besteffort-pod38439d67_e506_407a_b65d_e7dd3b4f13ff.slice. 
Dec 16 13:05:30.040295 containerd[2529]: time="2025-12-16T13:05:30.040238942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgr5,Uid:38439d67-e506-407a-b65d-e7dd3b4f13ff,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:31.471773 containerd[2529]: time="2025-12-16T13:05:31.471663903Z" level=error msg="Failed to destroy network for sandbox \"b808be7d8ae83a719ba15d5187ff17d2531ff3f8a5691ffb55ce76cc994fe263\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.473203 containerd[2529]: time="2025-12-16T13:05:31.473110290Z" level=error msg="Failed to destroy network for sandbox \"322d93e35c36db511e10e560db1c545936d53ce68579216e6aa2566699d5c392\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.482382 containerd[2529]: time="2025-12-16T13:05:31.482330969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-w9zv5,Uid:5156a21d-5db3-420c-a907-ae3cb29e174c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b808be7d8ae83a719ba15d5187ff17d2531ff3f8a5691ffb55ce76cc994fe263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.482988 kubelet[3970]: E1216 13:05:31.482928 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b808be7d8ae83a719ba15d5187ff17d2531ff3f8a5691ffb55ce76cc994fe263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.485084 kubelet[3970]: E1216 13:05:31.484902 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b808be7d8ae83a719ba15d5187ff17d2531ff3f8a5691ffb55ce76cc994fe263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" Dec 16 13:05:31.485084 kubelet[3970]: E1216 13:05:31.484946 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b808be7d8ae83a719ba15d5187ff17d2531ff3f8a5691ffb55ce76cc994fe263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" Dec 16 13:05:31.485084 kubelet[3970]: E1216 13:05:31.485029 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56b946485d-w9zv5_calico-apiserver(5156a21d-5db3-420c-a907-ae3cb29e174c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56b946485d-w9zv5_calico-apiserver(5156a21d-5db3-420c-a907-ae3cb29e174c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b808be7d8ae83a719ba15d5187ff17d2531ff3f8a5691ffb55ce76cc994fe263\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:05:31.504679 containerd[2529]: time="2025-12-16T13:05:31.504634576Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-28jk6,Uid:d9b60a6d-0ab6-4b06-a75c-aa98a45e065b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"322d93e35c36db511e10e560db1c545936d53ce68579216e6aa2566699d5c392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.505082 kubelet[3970]: E1216 13:05:31.505049 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322d93e35c36db511e10e560db1c545936d53ce68579216e6aa2566699d5c392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.505183 kubelet[3970]: E1216 13:05:31.505110 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322d93e35c36db511e10e560db1c545936d53ce68579216e6aa2566699d5c392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-28jk6" Dec 16 13:05:31.505183 kubelet[3970]: E1216 13:05:31.505133 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322d93e35c36db511e10e560db1c545936d53ce68579216e6aa2566699d5c392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-28jk6" Dec 16 13:05:31.505273 kubelet[3970]: E1216 13:05:31.505188 3970 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-28jk6_kube-system(d9b60a6d-0ab6-4b06-a75c-aa98a45e065b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-28jk6_kube-system(d9b60a6d-0ab6-4b06-a75c-aa98a45e065b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"322d93e35c36db511e10e560db1c545936d53ce68579216e6aa2566699d5c392\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-28jk6" podUID="d9b60a6d-0ab6-4b06-a75c-aa98a45e065b" Dec 16 13:05:31.516960 containerd[2529]: time="2025-12-16T13:05:31.516916356Z" level=error msg="Failed to destroy network for sandbox \"424815ef000d9bdf15bd34959eaa9189f8f97f4e523dc88a3992aae324d2c9c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.517522 containerd[2529]: time="2025-12-16T13:05:31.517491664Z" level=error msg="Failed to destroy network for sandbox \"195ec3e91bc66e931378bf9851eca6c5cbc14d42864c8aa70c0d4a639df86e2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.517902 containerd[2529]: time="2025-12-16T13:05:31.517825873Z" level=error msg="Failed to destroy network for sandbox \"dc91640c61d56b6ef66e65abfe9f275754f25f1ebca97bf58c619127e80099d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.519284 containerd[2529]: time="2025-12-16T13:05:31.519258921Z" level=error msg="Failed to destroy 
network for sandbox \"3d00faf82d459e857769156a8260ef5f8e8df282445cc846c1167698790a8443\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.526270 containerd[2529]: time="2025-12-16T13:05:31.526234876Z" level=error msg="Failed to destroy network for sandbox \"b33b6ab1309bb38168b2aaef13f49c53bce5b1530440739ac0089968f8f26dc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.527712 containerd[2529]: time="2025-12-16T13:05:31.527676749Z" level=error msg="Failed to destroy network for sandbox \"f3e175d1191b875256fb858dbd3f8c87c73b7601aaa895b4fbc130975380844c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.551309 containerd[2529]: time="2025-12-16T13:05:31.551271135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jxhjr,Uid:550ac59b-859a-421a-a47e-2547bf61257f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d00faf82d459e857769156a8260ef5f8e8df282445cc846c1167698790a8443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.551517 kubelet[3970]: E1216 13:05:31.551486 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d00faf82d459e857769156a8260ef5f8e8df282445cc846c1167698790a8443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.551598 kubelet[3970]: E1216 13:05:31.551541 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d00faf82d459e857769156a8260ef5f8e8df282445cc846c1167698790a8443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jxhjr" Dec 16 13:05:31.551598 kubelet[3970]: E1216 13:05:31.551569 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d00faf82d459e857769156a8260ef5f8e8df282445cc846c1167698790a8443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jxhjr" Dec 16 13:05:31.551654 kubelet[3970]: E1216 13:05:31.551616 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jxhjr_kube-system(550ac59b-859a-421a-a47e-2547bf61257f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jxhjr_kube-system(550ac59b-859a-421a-a47e-2547bf61257f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d00faf82d459e857769156a8260ef5f8e8df282445cc846c1167698790a8443\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jxhjr" podUID="550ac59b-859a-421a-a47e-2547bf61257f" Dec 16 13:05:31.562988 containerd[2529]: time="2025-12-16T13:05:31.562941453Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-8bf7db64d-zpn94,Uid:3ce56745-c6a8-40c1-81e7-b66ad27dd817,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"424815ef000d9bdf15bd34959eaa9189f8f97f4e523dc88a3992aae324d2c9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.563178 kubelet[3970]: E1216 13:05:31.563134 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"424815ef000d9bdf15bd34959eaa9189f8f97f4e523dc88a3992aae324d2c9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.563260 kubelet[3970]: E1216 13:05:31.563196 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"424815ef000d9bdf15bd34959eaa9189f8f97f4e523dc88a3992aae324d2c9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" Dec 16 13:05:31.563260 kubelet[3970]: E1216 13:05:31.563215 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"424815ef000d9bdf15bd34959eaa9189f8f97f4e523dc88a3992aae324d2c9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" Dec 16 13:05:31.563331 kubelet[3970]: E1216 13:05:31.563254 
3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8bf7db64d-zpn94_calico-system(3ce56745-c6a8-40c1-81e7-b66ad27dd817)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8bf7db64d-zpn94_calico-system(3ce56745-c6a8-40c1-81e7-b66ad27dd817)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"424815ef000d9bdf15bd34959eaa9189f8f97f4e523dc88a3992aae324d2c9c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:05:31.572963 containerd[2529]: time="2025-12-16T13:05:31.572925052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c8459879b-drtr9,Uid:7e4e3625-dd87-4672-8c32-4a4421202c74,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"195ec3e91bc66e931378bf9851eca6c5cbc14d42864c8aa70c0d4a639df86e2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.573116 kubelet[3970]: E1216 13:05:31.573091 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195ec3e91bc66e931378bf9851eca6c5cbc14d42864c8aa70c0d4a639df86e2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.573189 kubelet[3970]: E1216 13:05:31.573132 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"195ec3e91bc66e931378bf9851eca6c5cbc14d42864c8aa70c0d4a639df86e2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c8459879b-drtr9" Dec 16 13:05:31.573189 kubelet[3970]: E1216 13:05:31.573180 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195ec3e91bc66e931378bf9851eca6c5cbc14d42864c8aa70c0d4a639df86e2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c8459879b-drtr9" Dec 16 13:05:31.573262 kubelet[3970]: E1216 13:05:31.573223 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c8459879b-drtr9_calico-system(7e4e3625-dd87-4672-8c32-4a4421202c74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c8459879b-drtr9_calico-system(7e4e3625-dd87-4672-8c32-4a4421202c74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"195ec3e91bc66e931378bf9851eca6c5cbc14d42864c8aa70c0d4a639df86e2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c8459879b-drtr9" podUID="7e4e3625-dd87-4672-8c32-4a4421202c74" Dec 16 13:05:31.576391 containerd[2529]: time="2025-12-16T13:05:31.576341996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k2n4d,Uid:7ece9954-95e0-4c99-bc7f-1fc28a21ac1f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dc91640c61d56b6ef66e65abfe9f275754f25f1ebca97bf58c619127e80099d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.576551 kubelet[3970]: E1216 13:05:31.576508 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc91640c61d56b6ef66e65abfe9f275754f25f1ebca97bf58c619127e80099d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.576604 kubelet[3970]: E1216 13:05:31.576563 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc91640c61d56b6ef66e65abfe9f275754f25f1ebca97bf58c619127e80099d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k2n4d" Dec 16 13:05:31.576604 kubelet[3970]: E1216 13:05:31.576581 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc91640c61d56b6ef66e65abfe9f275754f25f1ebca97bf58c619127e80099d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k2n4d" Dec 16 13:05:31.576653 kubelet[3970]: E1216 13:05:31.576619 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-k2n4d_calico-system(7ece9954-95e0-4c99-bc7f-1fc28a21ac1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-666569f655-k2n4d_calico-system(7ece9954-95e0-4c99-bc7f-1fc28a21ac1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc91640c61d56b6ef66e65abfe9f275754f25f1ebca97bf58c619127e80099d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 16 13:05:31.585775 containerd[2529]: time="2025-12-16T13:05:31.585715512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-6kjlw,Uid:46b8f3f5-9271-4b12-86c1-faf1b8e7af82,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b33b6ab1309bb38168b2aaef13f49c53bce5b1530440739ac0089968f8f26dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.585925 kubelet[3970]: E1216 13:05:31.585888 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b33b6ab1309bb38168b2aaef13f49c53bce5b1530440739ac0089968f8f26dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.586018 kubelet[3970]: E1216 13:05:31.586001 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b33b6ab1309bb38168b2aaef13f49c53bce5b1530440739ac0089968f8f26dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" Dec 16 13:05:31.586066 kubelet[3970]: E1216 13:05:31.586022 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b33b6ab1309bb38168b2aaef13f49c53bce5b1530440739ac0089968f8f26dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" Dec 16 13:05:31.586095 kubelet[3970]: E1216 13:05:31.586064 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56b946485d-6kjlw_calico-apiserver(46b8f3f5-9271-4b12-86c1-faf1b8e7af82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56b946485d-6kjlw_calico-apiserver(46b8f3f5-9271-4b12-86c1-faf1b8e7af82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b33b6ab1309bb38168b2aaef13f49c53bce5b1530440739ac0089968f8f26dc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:05:31.648669 containerd[2529]: time="2025-12-16T13:05:31.648633483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgr5,Uid:38439d67-e506-407a-b65d-e7dd3b4f13ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3e175d1191b875256fb858dbd3f8c87c73b7601aaa895b4fbc130975380844c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
13:05:31.648875 kubelet[3970]: E1216 13:05:31.648852 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3e175d1191b875256fb858dbd3f8c87c73b7601aaa895b4fbc130975380844c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:31.648929 kubelet[3970]: E1216 13:05:31.648890 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3e175d1191b875256fb858dbd3f8c87c73b7601aaa895b4fbc130975380844c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-swgr5" Dec 16 13:05:31.648959 kubelet[3970]: E1216 13:05:31.648908 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3e175d1191b875256fb858dbd3f8c87c73b7601aaa895b4fbc130975380844c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-swgr5" Dec 16 13:05:31.648990 kubelet[3970]: E1216 13:05:31.648973 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3e175d1191b875256fb858dbd3f8c87c73b7601aaa895b4fbc130975380844c\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:32.155686 containerd[2529]: time="2025-12-16T13:05:32.155619960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:05:32.226683 systemd[1]: run-netns-cni\x2d99b2e51d\x2da7b4\x2d8dfa\x2d70e8\x2d77bdb52d85de.mount: Deactivated successfully. Dec 16 13:05:32.226862 systemd[1]: run-netns-cni\x2d49f3b348\x2d88fb\x2d5003\x2d590f\x2d3338f4d9524d.mount: Deactivated successfully. Dec 16 13:05:32.226914 systemd[1]: run-netns-cni\x2da272777c\x2d3525\x2d8606\x2da4dc\x2db35ccc006a18.mount: Deactivated successfully. Dec 16 13:05:32.226961 systemd[1]: run-netns-cni\x2d5c4cc6ee\x2dd6fd\x2dbcb5\x2d9e24\x2df11af5268ea6.mount: Deactivated successfully. Dec 16 13:05:32.227014 systemd[1]: run-netns-cni\x2ddeeceaf1\x2da767\x2d4798\x2d0c32\x2dea8427296280.mount: Deactivated successfully. 
Dec 16 13:05:38.267218 kubelet[3970]: I1216 13:05:38.266971 3970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:05:38.302000 audit[5014]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:38.304625 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 13:05:38.304691 kernel: audit: type=1325 audit(1765890338.302:600): table=filter:120 family=2 entries=21 op=nft_register_rule pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:38.302000 audit[5014]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeef7a2da0 a2=0 a3=7ffeef7a2d8c items=0 ppid=4122 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:38.315335 kernel: audit: type=1300 audit(1765890338.302:600): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeef7a2da0 a2=0 a3=7ffeef7a2d8c items=0 ppid=4122 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:38.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:38.314000 audit[5014]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:38.320028 kernel: audit: type=1327 audit(1765890338.302:600): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:38.320075 kernel: audit: type=1325 audit(1765890338.314:601): table=nat:121 family=2 entries=19 op=nft_register_chain pid=5014 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:38.314000 audit[5014]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffeef7a2da0 a2=0 a3=7ffeef7a2d8c items=0 ppid=4122 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:38.324785 kernel: audit: type=1300 audit(1765890338.314:601): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffeef7a2da0 a2=0 a3=7ffeef7a2d8c items=0 ppid=4122 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:38.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:38.328324 kernel: audit: type=1327 audit(1765890338.314:601): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:42.034388 containerd[2529]: time="2025-12-16T13:05:42.034334165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgr5,Uid:38439d67-e506-407a-b65d-e7dd3b4f13ff,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:42.110176 containerd[2529]: time="2025-12-16T13:05:42.109914198Z" level=error msg="Failed to destroy network for sandbox \"3780a90e7ec8539a3a4ee25e15806373499f07cec25b34fdde5f491d571c4adb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:42.113368 systemd[1]: run-netns-cni\x2d4da49f1d\x2de4b9\x2d066b\x2d7d69\x2d05438a41ed98.mount: Deactivated successfully. 
Dec 16 13:05:42.124549 containerd[2529]: time="2025-12-16T13:05:42.124457618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgr5,Uid:38439d67-e506-407a-b65d-e7dd3b4f13ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3780a90e7ec8539a3a4ee25e15806373499f07cec25b34fdde5f491d571c4adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:42.125425 kubelet[3970]: E1216 13:05:42.124764 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3780a90e7ec8539a3a4ee25e15806373499f07cec25b34fdde5f491d571c4adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:42.125425 kubelet[3970]: E1216 13:05:42.124855 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3780a90e7ec8539a3a4ee25e15806373499f07cec25b34fdde5f491d571c4adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-swgr5" Dec 16 13:05:42.125425 kubelet[3970]: E1216 13:05:42.124883 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3780a90e7ec8539a3a4ee25e15806373499f07cec25b34fdde5f491d571c4adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-swgr5" 
Dec 16 13:05:42.126255 kubelet[3970]: E1216 13:05:42.124940 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3780a90e7ec8539a3a4ee25e15806373499f07cec25b34fdde5f491d571c4adb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:43.036981 containerd[2529]: time="2025-12-16T13:05:43.036917034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-6kjlw,Uid:46b8f3f5-9271-4b12-86c1-faf1b8e7af82,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:43.129590 containerd[2529]: time="2025-12-16T13:05:43.129517963Z" level=error msg="Failed to destroy network for sandbox \"761a1607b2d491b90f0d2f7d119a9c1de85ab9b5dcbaaf0e61c581717eae7a03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:43.133402 systemd[1]: run-netns-cni\x2d5ef552fe\x2d4cf7\x2d9fb0\x2d5462\x2daca6c0a887a0.mount: Deactivated successfully. 
Dec 16 13:05:43.139504 containerd[2529]: time="2025-12-16T13:05:43.139423518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-6kjlw,Uid:46b8f3f5-9271-4b12-86c1-faf1b8e7af82,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"761a1607b2d491b90f0d2f7d119a9c1de85ab9b5dcbaaf0e61c581717eae7a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:43.141275 kubelet[3970]: E1216 13:05:43.139625 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761a1607b2d491b90f0d2f7d119a9c1de85ab9b5dcbaaf0e61c581717eae7a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:43.141275 kubelet[3970]: E1216 13:05:43.139693 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761a1607b2d491b90f0d2f7d119a9c1de85ab9b5dcbaaf0e61c581717eae7a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" Dec 16 13:05:43.141275 kubelet[3970]: E1216 13:05:43.139715 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761a1607b2d491b90f0d2f7d119a9c1de85ab9b5dcbaaf0e61c581717eae7a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" Dec 16 13:05:43.141667 kubelet[3970]: E1216 13:05:43.139768 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56b946485d-6kjlw_calico-apiserver(46b8f3f5-9271-4b12-86c1-faf1b8e7af82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56b946485d-6kjlw_calico-apiserver(46b8f3f5-9271-4b12-86c1-faf1b8e7af82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"761a1607b2d491b90f0d2f7d119a9c1de85ab9b5dcbaaf0e61c581717eae7a03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:05:44.035188 containerd[2529]: time="2025-12-16T13:05:44.035026136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jxhjr,Uid:550ac59b-859a-421a-a47e-2547bf61257f,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:44.036567 containerd[2529]: time="2025-12-16T13:05:44.036415151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-28jk6,Uid:d9b60a6d-0ab6-4b06-a75c-aa98a45e065b,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:44.069854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966915476.mount: Deactivated successfully. 
Dec 16 13:05:44.133579 containerd[2529]: time="2025-12-16T13:05:44.133541791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:44.138696 containerd[2529]: time="2025-12-16T13:05:44.138662293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 13:05:44.156631 containerd[2529]: time="2025-12-16T13:05:44.156569508Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:44.158002 containerd[2529]: time="2025-12-16T13:05:44.157412117Z" level=error msg="Failed to destroy network for sandbox \"ee37c94af9715f7ef285adb6de0a849f7ccb488aa9d98190d31f9c590b30b685\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:44.159805 systemd[1]: run-netns-cni\x2d9c66bc05\x2d6d2f\x2d05ac\x2d4b89\x2daf53220818af.mount: Deactivated successfully. Dec 16 13:05:44.163989 containerd[2529]: time="2025-12-16T13:05:44.163956354Z" level=error msg="Failed to destroy network for sandbox \"ba21b8f67323cea5ac756489062920cc0db57c67a6852eb6e5afdd5dd2b32629\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:44.165622 systemd[1]: run-netns-cni\x2dbfacc2cd\x2d8924\x2dc007\x2de7b6\x2d172ecb093cd0.mount: Deactivated successfully. 
Dec 16 13:05:44.168016 containerd[2529]: time="2025-12-16T13:05:44.167979041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-28jk6,Uid:d9b60a6d-0ab6-4b06-a75c-aa98a45e065b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee37c94af9715f7ef285adb6de0a849f7ccb488aa9d98190d31f9c590b30b685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:44.168422 kubelet[3970]: E1216 13:05:44.168348 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee37c94af9715f7ef285adb6de0a849f7ccb488aa9d98190d31f9c590b30b685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:44.168964 kubelet[3970]: E1216 13:05:44.168711 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee37c94af9715f7ef285adb6de0a849f7ccb488aa9d98190d31f9c590b30b685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-28jk6" Dec 16 13:05:44.168964 kubelet[3970]: E1216 13:05:44.168737 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee37c94af9715f7ef285adb6de0a849f7ccb488aa9d98190d31f9c590b30b685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-28jk6" Dec 16 13:05:44.168964 kubelet[3970]: E1216 13:05:44.168785 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-28jk6_kube-system(d9b60a6d-0ab6-4b06-a75c-aa98a45e065b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-28jk6_kube-system(d9b60a6d-0ab6-4b06-a75c-aa98a45e065b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee37c94af9715f7ef285adb6de0a849f7ccb488aa9d98190d31f9c590b30b685\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-28jk6" podUID="d9b60a6d-0ab6-4b06-a75c-aa98a45e065b" Dec 16 13:05:44.174658 containerd[2529]: time="2025-12-16T13:05:44.174629492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:44.175001 containerd[2529]: time="2025-12-16T13:05:44.174979061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.01930019s" Dec 16 13:05:44.175042 containerd[2529]: time="2025-12-16T13:05:44.175009071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:05:44.177619 containerd[2529]: time="2025-12-16T13:05:44.177535150Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-jxhjr,Uid:550ac59b-859a-421a-a47e-2547bf61257f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba21b8f67323cea5ac756489062920cc0db57c67a6852eb6e5afdd5dd2b32629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:44.178089 kubelet[3970]: E1216 13:05:44.178052 3970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba21b8f67323cea5ac756489062920cc0db57c67a6852eb6e5afdd5dd2b32629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:44.178192 kubelet[3970]: E1216 13:05:44.178099 3970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba21b8f67323cea5ac756489062920cc0db57c67a6852eb6e5afdd5dd2b32629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jxhjr" Dec 16 13:05:44.178235 kubelet[3970]: E1216 13:05:44.178199 3970 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba21b8f67323cea5ac756489062920cc0db57c67a6852eb6e5afdd5dd2b32629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jxhjr" Dec 16 13:05:44.178274 kubelet[3970]: E1216 13:05:44.178248 3970 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jxhjr_kube-system(550ac59b-859a-421a-a47e-2547bf61257f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jxhjr_kube-system(550ac59b-859a-421a-a47e-2547bf61257f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba21b8f67323cea5ac756489062920cc0db57c67a6852eb6e5afdd5dd2b32629\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jxhjr" podUID="550ac59b-859a-421a-a47e-2547bf61257f" Dec 16 13:05:44.184715 containerd[2529]: time="2025-12-16T13:05:44.184690276Z" level=info msg="CreateContainer within sandbox \"d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:05:44.209924 containerd[2529]: time="2025-12-16T13:05:44.209901943Z" level=info msg="Container c0215578d210029f7cd90e864d9140d1976d0e9fd7c9f248d2768b42ff314298: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:44.231999 containerd[2529]: time="2025-12-16T13:05:44.231975761Z" level=info msg="CreateContainer within sandbox \"d564204283d4482e7278414aceaacdbb139b0dc51e0a75a5f70dc04513fdb184\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c0215578d210029f7cd90e864d9140d1976d0e9fd7c9f248d2768b42ff314298\"" Dec 16 13:05:44.232561 containerd[2529]: time="2025-12-16T13:05:44.232421677Z" level=info msg="StartContainer for \"c0215578d210029f7cd90e864d9140d1976d0e9fd7c9f248d2768b42ff314298\"" Dec 16 13:05:44.234121 containerd[2529]: time="2025-12-16T13:05:44.234092362Z" level=info msg="connecting to shim c0215578d210029f7cd90e864d9140d1976d0e9fd7c9f248d2768b42ff314298" address="unix:///run/containerd/s/82837e815465eb95658234395a287fbc727dafd9f1d6fe2d1d8fc4dd2fe5fa70" protocol=ttrpc version=3 Dec 16 
13:05:44.255351 systemd[1]: Started cri-containerd-c0215578d210029f7cd90e864d9140d1976d0e9fd7c9f248d2768b42ff314298.scope - libcontainer container c0215578d210029f7cd90e864d9140d1976d0e9fd7c9f248d2768b42ff314298. Dec 16 13:05:44.295000 audit: BPF prog-id=196 op=LOAD Dec 16 13:05:44.298171 kernel: audit: type=1334 audit(1765890344.295:602): prog-id=196 op=LOAD Dec 16 13:05:44.295000 audit[5124]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4478 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.303181 kernel: audit: type=1300 audit(1765890344.295:602): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4478 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323135353738643231303032396637636439306538363464393134 Dec 16 13:05:44.310228 kernel: audit: type=1327 audit(1765890344.295:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323135353738643231303032396637636439306538363464393134 Dec 16 13:05:44.295000 audit: BPF prog-id=197 op=LOAD Dec 16 13:05:44.317572 kernel: audit: type=1334 audit(1765890344.295:603): prog-id=197 op=LOAD Dec 16 13:05:44.317633 kernel: audit: type=1300 audit(1765890344.295:603): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 
ppid=4478 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.295000 audit[5124]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4478 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.323253 kernel: audit: type=1327 audit(1765890344.295:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323135353738643231303032396637636439306538363464393134 Dec 16 13:05:44.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323135353738643231303032396637636439306538363464393134 Dec 16 13:05:44.295000 audit: BPF prog-id=197 op=UNLOAD Dec 16 13:05:44.329192 kernel: audit: type=1334 audit(1765890344.295:604): prog-id=197 op=UNLOAD Dec 16 13:05:44.329276 kernel: audit: type=1300 audit(1765890344.295:604): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.295000 audit[5124]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.295000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323135353738643231303032396637636439306538363464393134 Dec 16 13:05:44.332838 kernel: audit: type=1327 audit(1765890344.295:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323135353738643231303032396637636439306538363464393134 Dec 16 13:05:44.295000 audit: BPF prog-id=196 op=UNLOAD Dec 16 13:05:44.338183 kernel: audit: type=1334 audit(1765890344.295:605): prog-id=196 op=UNLOAD Dec 16 13:05:44.295000 audit[5124]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323135353738643231303032396637636439306538363464393134 Dec 16 13:05:44.295000 audit: BPF prog-id=198 op=LOAD Dec 16 13:05:44.295000 audit[5124]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4478 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:44.295000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323135353738643231303032396637636439306538363464393134 Dec 16 13:05:44.342763 containerd[2529]: time="2025-12-16T13:05:44.341928855Z" level=info msg="StartContainer for \"c0215578d210029f7cd90e864d9140d1976d0e9fd7c9f248d2768b42ff314298\" returns successfully" Dec 16 13:05:44.710221 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:05:44.710366 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 13:05:44.869200 kubelet[3970]: I1216 13:05:44.868785 3970 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e4e3625-dd87-4672-8c32-4a4421202c74-whisker-backend-key-pair\") pod \"7e4e3625-dd87-4672-8c32-4a4421202c74\" (UID: \"7e4e3625-dd87-4672-8c32-4a4421202c74\") " Dec 16 13:05:44.869200 kubelet[3970]: I1216 13:05:44.868834 3970 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e4e3625-dd87-4672-8c32-4a4421202c74-whisker-ca-bundle\") pod \"7e4e3625-dd87-4672-8c32-4a4421202c74\" (UID: \"7e4e3625-dd87-4672-8c32-4a4421202c74\") " Dec 16 13:05:44.869200 kubelet[3970]: I1216 13:05:44.868875 3970 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzsvf\" (UniqueName: \"kubernetes.io/projected/7e4e3625-dd87-4672-8c32-4a4421202c74-kube-api-access-dzsvf\") pod \"7e4e3625-dd87-4672-8c32-4a4421202c74\" (UID: \"7e4e3625-dd87-4672-8c32-4a4421202c74\") " Dec 16 13:05:44.869783 kubelet[3970]: I1216 13:05:44.869749 3970 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4e3625-dd87-4672-8c32-4a4421202c74-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "7e4e3625-dd87-4672-8c32-4a4421202c74" (UID: "7e4e3625-dd87-4672-8c32-4a4421202c74"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:05:44.874887 kubelet[3970]: I1216 13:05:44.874851 3970 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4e3625-dd87-4672-8c32-4a4421202c74-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7e4e3625-dd87-4672-8c32-4a4421202c74" (UID: "7e4e3625-dd87-4672-8c32-4a4421202c74"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:05:44.876296 kubelet[3970]: I1216 13:05:44.876269 3970 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4e3625-dd87-4672-8c32-4a4421202c74-kube-api-access-dzsvf" (OuterVolumeSpecName: "kube-api-access-dzsvf") pod "7e4e3625-dd87-4672-8c32-4a4421202c74" (UID: "7e4e3625-dd87-4672-8c32-4a4421202c74"). InnerVolumeSpecName "kube-api-access-dzsvf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:05:44.969569 kubelet[3970]: I1216 13:05:44.969465 3970 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e4e3625-dd87-4672-8c32-4a4421202c74-whisker-backend-key-pair\") on node \"ci-4515.1.0-a-bc3c22631a\" DevicePath \"\"" Dec 16 13:05:44.969569 kubelet[3970]: I1216 13:05:44.969492 3970 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e4e3625-dd87-4672-8c32-4a4421202c74-whisker-ca-bundle\") on node \"ci-4515.1.0-a-bc3c22631a\" DevicePath \"\"" Dec 16 13:05:44.969569 kubelet[3970]: I1216 13:05:44.969503 3970 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dzsvf\" (UniqueName: \"kubernetes.io/projected/7e4e3625-dd87-4672-8c32-4a4421202c74-kube-api-access-dzsvf\") on node \"ci-4515.1.0-a-bc3c22631a\" DevicePath \"\"" Dec 16 13:05:45.034515 containerd[2529]: time="2025-12-16T13:05:45.034187409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-w9zv5,Uid:5156a21d-5db3-420c-a907-ae3cb29e174c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:45.039401 systemd[1]: Removed slice kubepods-besteffort-pod7e4e3625_dd87_4672_8c32_4a4421202c74.slice - libcontainer container kubepods-besteffort-pod7e4e3625_dd87_4672_8c32_4a4421202c74.slice. Dec 16 13:05:45.064046 systemd[1]: var-lib-kubelet-pods-7e4e3625\x2ddd87\x2d4672\x2d8c32\x2d4a4421202c74-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 13:05:45.065853 systemd[1]: var-lib-kubelet-pods-7e4e3625\x2ddd87\x2d4672\x2d8c32\x2d4a4421202c74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddzsvf.mount: Deactivated successfully. 
Dec 16 13:05:45.148538 systemd-networkd[2157]: califcb39332d20: Link UP Dec 16 13:05:45.149689 systemd-networkd[2157]: califcb39332d20: Gained carrier Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.077 [INFO][5186] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.086 [INFO][5186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0 calico-apiserver-56b946485d- calico-apiserver 5156a21d-5db3-420c-a907-ae3cb29e174c 834 0 2025-12-16 13:05:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56b946485d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-bc3c22631a calico-apiserver-56b946485d-w9zv5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califcb39332d20 [] [] }} ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-w9zv5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.086 [INFO][5186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-w9zv5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.109 [INFO][5199] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" 
HandleID="k8s-pod-network.1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Workload="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.109 [INFO][5199] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" HandleID="k8s-pod-network.1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Workload="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-bc3c22631a", "pod":"calico-apiserver-56b946485d-w9zv5", "timestamp":"2025-12-16 13:05:45.109497013 +0000 UTC"}, Hostname:"ci-4515.1.0-a-bc3c22631a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.109 [INFO][5199] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.109 [INFO][5199] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.109 [INFO][5199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-bc3c22631a' Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.114 [INFO][5199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.118 [INFO][5199] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.120 [INFO][5199] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.122 [INFO][5199] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.123 [INFO][5199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.123 [INFO][5199] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.124 [INFO][5199] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.128 [INFO][5199] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.137 [INFO][5199] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.32.193/26] block=192.168.32.192/26 handle="k8s-pod-network.1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.137 [INFO][5199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.193/26] handle="k8s-pod-network.1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.137 [INFO][5199] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:45.166261 containerd[2529]: 2025-12-16 13:05:45.137 [INFO][5199] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.193/26] IPv6=[] ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" HandleID="k8s-pod-network.1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Workload="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" Dec 16 13:05:45.167382 containerd[2529]: 2025-12-16 13:05:45.141 [INFO][5186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-w9zv5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0", GenerateName:"calico-apiserver-56b946485d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5156a21d-5db3-420c-a907-ae3cb29e174c", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b946485d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"", Pod:"calico-apiserver-56b946485d-w9zv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcb39332d20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:45.167382 containerd[2529]: 2025-12-16 13:05:45.141 [INFO][5186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.193/32] ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-w9zv5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" Dec 16 13:05:45.167382 containerd[2529]: 2025-12-16 13:05:45.141 [INFO][5186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcb39332d20 ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-w9zv5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" Dec 16 13:05:45.167382 containerd[2529]: 2025-12-16 13:05:45.150 [INFO][5186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Namespace="calico-apiserver" 
Pod="calico-apiserver-56b946485d-w9zv5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" Dec 16 13:05:45.167382 containerd[2529]: 2025-12-16 13:05:45.150 [INFO][5186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-w9zv5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0", GenerateName:"calico-apiserver-56b946485d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5156a21d-5db3-420c-a907-ae3cb29e174c", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b946485d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e", Pod:"calico-apiserver-56b946485d-w9zv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"califcb39332d20", MAC:"5a:33:bf:30:0c:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:45.167382 containerd[2529]: 2025-12-16 13:05:45.163 [INFO][5186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-w9zv5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--w9zv5-eth0" Dec 16 13:05:45.205209 kubelet[3970]: I1216 13:05:45.205136 3970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sghrm" podStartSLOduration=1.96267449 podStartE2EDuration="29.205117659s" podCreationTimestamp="2025-12-16 13:05:16 +0000 UTC" firstStartedPulling="2025-12-16 13:05:16.933395277 +0000 UTC m=+23.993931904" lastFinishedPulling="2025-12-16 13:05:44.175838452 +0000 UTC m=+51.236375073" observedRunningTime="2025-12-16 13:05:45.205011809 +0000 UTC m=+52.265548425" watchObservedRunningTime="2025-12-16 13:05:45.205117659 +0000 UTC m=+52.265654279" Dec 16 13:05:45.293019 systemd[1]: Created slice kubepods-besteffort-podd76f74ff_aa1c_40a0_a82a_3e2f5f1611ac.slice - libcontainer container kubepods-besteffort-podd76f74ff_aa1c_40a0_a82a_3e2f5f1611ac.slice. 
Dec 16 13:05:45.538549 kubelet[3970]: I1216 13:05:45.372356 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac-whisker-backend-key-pair\") pod \"whisker-b7d557678-kmhrd\" (UID: \"d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac\") " pod="calico-system/whisker-b7d557678-kmhrd" Dec 16 13:05:45.538549 kubelet[3970]: I1216 13:05:45.372393 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848zf\" (UniqueName: \"kubernetes.io/projected/d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac-kube-api-access-848zf\") pod \"whisker-b7d557678-kmhrd\" (UID: \"d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac\") " pod="calico-system/whisker-b7d557678-kmhrd" Dec 16 13:05:45.538549 kubelet[3970]: I1216 13:05:45.372427 3970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac-whisker-ca-bundle\") pod \"whisker-b7d557678-kmhrd\" (UID: \"d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac\") " pod="calico-system/whisker-b7d557678-kmhrd" Dec 16 13:05:45.597512 containerd[2529]: time="2025-12-16T13:05:45.597385805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b7d557678-kmhrd,Uid:d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:46.034364 containerd[2529]: time="2025-12-16T13:05:46.034307143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k2n4d,Uid:7ece9954-95e0-4c99-bc7f-1fc28a21ac1f,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:46.035676 containerd[2529]: time="2025-12-16T13:05:46.035611240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8bf7db64d-zpn94,Uid:3ce56745-c6a8-40c1-81e7-b66ad27dd817,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:46.378060 
containerd[2529]: time="2025-12-16T13:05:46.377829574Z" level=info msg="connecting to shim 1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e" address="unix:///run/containerd/s/559e45b7e8101b47aa833bcb644e993a68e1a5fb5ae8e4032cf10f72a47b42e5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:46.457458 systemd[1]: Started cri-containerd-1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e.scope - libcontainer container 1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e. Dec 16 13:05:46.507000 audit: BPF prog-id=199 op=LOAD Dec 16 13:05:46.511000 audit: BPF prog-id=200 op=LOAD Dec 16 13:05:46.511000 audit[5443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=5431 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323461383531373231643334656436386335373063373935383965 Dec 16 13:05:46.511000 audit: BPF prog-id=200 op=UNLOAD Dec 16 13:05:46.511000 audit[5443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5431 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323461383531373231643334656436386335373063373935383965 Dec 16 13:05:46.512000 audit: BPF prog-id=201 op=LOAD Dec 16 
13:05:46.512000 audit[5443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=5431 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323461383531373231643334656436386335373063373935383965 Dec 16 13:05:46.512000 audit: BPF prog-id=202 op=LOAD Dec 16 13:05:46.512000 audit[5443]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=5431 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323461383531373231643334656436386335373063373935383965 Dec 16 13:05:46.512000 audit: BPF prog-id=202 op=UNLOAD Dec 16 13:05:46.512000 audit[5443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5431 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323461383531373231643334656436386335373063373935383965 Dec 16 
13:05:46.512000 audit: BPF prog-id=201 op=UNLOAD Dec 16 13:05:46.512000 audit[5443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5431 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323461383531373231643334656436386335373063373935383965 Dec 16 13:05:46.512000 audit: BPF prog-id=203 op=LOAD Dec 16 13:05:46.512000 audit[5443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=5431 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323461383531373231643334656436386335373063373935383965 Dec 16 13:05:46.531000 audit: BPF prog-id=204 op=LOAD Dec 16 13:05:46.531000 audit[5497]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb1199e30 a2=98 a3=1fffffffffffffff items=0 ppid=5288 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.531000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:46.531000 audit: BPF prog-id=204 op=UNLOAD Dec 16 13:05:46.531000 audit[5497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeb1199e00 a3=0 items=0 ppid=5288 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.531000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:46.531000 audit: BPF prog-id=205 op=LOAD Dec 16 13:05:46.531000 audit[5497]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb1199d10 a2=94 a3=3 items=0 ppid=5288 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.531000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:46.531000 audit: BPF prog-id=205 op=UNLOAD Dec 16 13:05:46.531000 audit[5497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb1199d10 a2=94 a3=3 items=0 ppid=5288 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.531000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:46.531000 audit: BPF prog-id=206 op=LOAD Dec 16 13:05:46.531000 audit[5497]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb1199d50 a2=94 a3=7ffeb1199f30 items=0 ppid=5288 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.531000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:46.531000 audit: BPF prog-id=206 op=UNLOAD Dec 16 13:05:46.531000 audit[5497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb1199d50 a2=94 a3=7ffeb1199f30 items=0 ppid=5288 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.531000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:46.545000 audit: BPF prog-id=207 op=LOAD Dec 16 13:05:46.545000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff16daefd0 a2=98 a3=3 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.545000 audit: BPF prog-id=207 op=UNLOAD Dec 16 13:05:46.545000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff16daefa0 a3=0 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.545000 audit: BPF prog-id=208 op=LOAD Dec 16 13:05:46.545000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff16daedc0 a2=94 a3=54428f items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.545000 audit: BPF prog-id=208 op=UNLOAD Dec 16 13:05:46.545000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff16daedc0 a2=94 a3=54428f items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.545000 audit: BPF prog-id=209 op=LOAD Dec 16 13:05:46.545000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff16daedf0 a2=94 a3=2 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.545000 audit: BPF prog-id=209 op=UNLOAD Dec 16 13:05:46.545000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff16daedf0 a2=0 a3=2 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.556374 systemd-networkd[2157]: califcb39332d20: Gained IPv6LL Dec 16 13:05:46.603142 containerd[2529]: time="2025-12-16T13:05:46.603040406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-w9zv5,Uid:5156a21d-5db3-420c-a907-ae3cb29e174c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1b24a851721d34ed68c570c79589e782b6de56a76e97f26af560215373374a1e\"" Dec 16 13:05:46.607172 containerd[2529]: time="2025-12-16T13:05:46.607099133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:05:46.633934 systemd-networkd[2157]: cali662b269a68c: Link UP Dec 16 13:05:46.636630 systemd-networkd[2157]: cali662b269a68c: Gained carrier Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.480 [INFO][5373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0 whisker-b7d557678- calico-system d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac 931 0 2025-12-16 13:05:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b7d557678 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-a-bc3c22631a whisker-b7d557678-kmhrd eth0 whisker 
[] [] [kns.calico-system ksa.calico-system.whisker] cali662b269a68c [] [] }} ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Namespace="calico-system" Pod="whisker-b7d557678-kmhrd" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.480 [INFO][5373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Namespace="calico-system" Pod="whisker-b7d557678-kmhrd" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.566 [INFO][5481] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" HandleID="k8s-pod-network.32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Workload="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.566 [INFO][5481] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" HandleID="k8s-pod-network.32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Workload="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-bc3c22631a", "pod":"whisker-b7d557678-kmhrd", "timestamp":"2025-12-16 13:05:46.566757956 +0000 UTC"}, Hostname:"ci-4515.1.0-a-bc3c22631a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.569 [INFO][5481] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.569 [INFO][5481] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.569 [INFO][5481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-bc3c22631a' Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.580 [INFO][5481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.585 [INFO][5481] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.593 [INFO][5481] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.596 [INFO][5481] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.598 [INFO][5481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.598 [INFO][5481] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.600 [INFO][5481] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097 Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.605 [INFO][5481] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.192/26 
handle="k8s-pod-network.32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.619 [INFO][5481] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.194/26] block=192.168.32.192/26 handle="k8s-pod-network.32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.619 [INFO][5481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.194/26] handle="k8s-pod-network.32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.619 [INFO][5481] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:46.653793 containerd[2529]: 2025-12-16 13:05:46.619 [INFO][5481] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.194/26] IPv6=[] ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" HandleID="k8s-pod-network.32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Workload="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" Dec 16 13:05:46.655699 containerd[2529]: 2025-12-16 13:05:46.623 [INFO][5373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Namespace="calico-system" Pod="whisker-b7d557678-kmhrd" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0", GenerateName:"whisker-b7d557678-", Namespace:"calico-system", SelfLink:"", UID:"d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 45, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b7d557678", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"", Pod:"whisker-b7d557678-kmhrd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali662b269a68c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:46.655699 containerd[2529]: 2025-12-16 13:05:46.623 [INFO][5373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.194/32] ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Namespace="calico-system" Pod="whisker-b7d557678-kmhrd" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" Dec 16 13:05:46.655699 containerd[2529]: 2025-12-16 13:05:46.623 [INFO][5373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali662b269a68c ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Namespace="calico-system" Pod="whisker-b7d557678-kmhrd" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" Dec 16 13:05:46.655699 containerd[2529]: 2025-12-16 13:05:46.639 [INFO][5373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Namespace="calico-system" 
Pod="whisker-b7d557678-kmhrd" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" Dec 16 13:05:46.655699 containerd[2529]: 2025-12-16 13:05:46.639 [INFO][5373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Namespace="calico-system" Pod="whisker-b7d557678-kmhrd" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0", GenerateName:"whisker-b7d557678-", Namespace:"calico-system", SelfLink:"", UID:"d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b7d557678", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097", Pod:"whisker-b7d557678-kmhrd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali662b269a68c", MAC:"be:72:1a:12:9c:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 
13:05:46.655699 containerd[2529]: 2025-12-16 13:05:46.651 [INFO][5373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" Namespace="calico-system" Pod="whisker-b7d557678-kmhrd" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-whisker--b7d557678--kmhrd-eth0" Dec 16 13:05:46.736343 systemd-networkd[2157]: caliceecda4a3aa: Link UP Dec 16 13:05:46.736533 systemd-networkd[2157]: caliceecda4a3aa: Gained carrier Dec 16 13:05:46.747494 containerd[2529]: time="2025-12-16T13:05:46.747456205Z" level=info msg="connecting to shim 32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097" address="unix:///run/containerd/s/92fdac1907a1919cc207ad55e63233da967f498cdccd30c46aa6fa8006cc9360" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.477 [INFO][5397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0 calico-kube-controllers-8bf7db64d- calico-system 3ce56745-c6a8-40c1-81e7-b66ad27dd817 836 0 2025-12-16 13:05:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8bf7db64d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-a-bc3c22631a calico-kube-controllers-8bf7db64d-zpn94 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliceecda4a3aa [] [] }} ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Namespace="calico-system" Pod="calico-kube-controllers-8bf7db64d-zpn94" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.477 [INFO][5397] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Namespace="calico-system" Pod="calico-kube-controllers-8bf7db64d-zpn94" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.579 [INFO][5479] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" HandleID="k8s-pod-network.13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Workload="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.579 [INFO][5479] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" HandleID="k8s-pod-network.13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Workload="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-bc3c22631a", "pod":"calico-kube-controllers-8bf7db64d-zpn94", "timestamp":"2025-12-16 13:05:46.57949917 +0000 UTC"}, Hostname:"ci-4515.1.0-a-bc3c22631a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.579 [INFO][5479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.621 [INFO][5479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.621 [INFO][5479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-bc3c22631a' Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.680 [INFO][5479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.684 [INFO][5479] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.690 [INFO][5479] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.692 [INFO][5479] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.694 [INFO][5479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.701 [INFO][5479] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.702 [INFO][5479] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6 Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.707 [INFO][5479] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.716 [INFO][5479] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.32.195/26] block=192.168.32.192/26 handle="k8s-pod-network.13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.716 [INFO][5479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.195/26] handle="k8s-pod-network.13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.716 [INFO][5479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:46.764269 containerd[2529]: 2025-12-16 13:05:46.716 [INFO][5479] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.195/26] IPv6=[] ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" HandleID="k8s-pod-network.13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Workload="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" Dec 16 13:05:46.764886 containerd[2529]: 2025-12-16 13:05:46.720 [INFO][5397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Namespace="calico-system" Pod="calico-kube-controllers-8bf7db64d-zpn94" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0", GenerateName:"calico-kube-controllers-8bf7db64d-", Namespace:"calico-system", SelfLink:"", UID:"3ce56745-c6a8-40c1-81e7-b66ad27dd817", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8bf7db64d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"", Pod:"calico-kube-controllers-8bf7db64d-zpn94", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliceecda4a3aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:46.764886 containerd[2529]: 2025-12-16 13:05:46.720 [INFO][5397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.195/32] ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Namespace="calico-system" Pod="calico-kube-controllers-8bf7db64d-zpn94" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" Dec 16 13:05:46.764886 containerd[2529]: 2025-12-16 13:05:46.720 [INFO][5397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceecda4a3aa ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Namespace="calico-system" Pod="calico-kube-controllers-8bf7db64d-zpn94" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" Dec 16 13:05:46.764886 containerd[2529]: 2025-12-16 13:05:46.734 [INFO][5397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Namespace="calico-system" Pod="calico-kube-controllers-8bf7db64d-zpn94" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" Dec 16 13:05:46.764886 containerd[2529]: 2025-12-16 13:05:46.735 [INFO][5397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Namespace="calico-system" Pod="calico-kube-controllers-8bf7db64d-zpn94" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0", GenerateName:"calico-kube-controllers-8bf7db64d-", Namespace:"calico-system", SelfLink:"", UID:"3ce56745-c6a8-40c1-81e7-b66ad27dd817", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8bf7db64d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6", Pod:"calico-kube-controllers-8bf7db64d-zpn94", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliceecda4a3aa", MAC:"de:c1:44:85:82:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:46.764886 containerd[2529]: 2025-12-16 13:05:46.759 [INFO][5397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" Namespace="calico-system" Pod="calico-kube-controllers-8bf7db64d-zpn94" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--kube--controllers--8bf7db64d--zpn94-eth0" Dec 16 13:05:46.788414 systemd[1]: Started cri-containerd-32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097.scope - libcontainer container 32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097. Dec 16 13:05:46.813000 audit: BPF prog-id=210 op=LOAD Dec 16 13:05:46.814000 audit: BPF prog-id=211 op=LOAD Dec 16 13:05:46.814000 audit[5544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332343737666464393139333232323466343837376139343638653666 Dec 16 13:05:46.814000 audit: BPF prog-id=211 op=UNLOAD Dec 16 13:05:46.814000 audit[5544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332343737666464393139333232323466343837376139343638653666 Dec 16 13:05:46.814000 audit: BPF prog-id=212 op=LOAD Dec 16 13:05:46.814000 audit[5544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332343737666464393139333232323466343837376139343638653666 Dec 16 13:05:46.814000 audit: BPF prog-id=213 op=LOAD Dec 16 13:05:46.814000 audit[5544]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332343737666464393139333232323466343837376139343638653666 Dec 16 13:05:46.814000 audit: BPF prog-id=213 op=UNLOAD Dec 16 13:05:46.814000 audit[5544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332343737666464393139333232323466343837376139343638653666 Dec 16 13:05:46.814000 audit: BPF prog-id=212 op=UNLOAD Dec 16 13:05:46.814000 audit[5544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332343737666464393139333232323466343837376139343638653666 Dec 16 13:05:46.814000 audit: BPF prog-id=214 op=LOAD Dec 16 13:05:46.814000 audit[5544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5532 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332343737666464393139333232323466343837376139343638653666 Dec 16 13:05:46.831549 containerd[2529]: time="2025-12-16T13:05:46.831514550Z" level=info msg="connecting to shim 13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6" 
address="unix:///run/containerd/s/c3f41eb8751f64e2c1af7c9cb160f2e7e2b6c9d6d390d95a103b481ead6f310d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:46.838257 systemd-networkd[2157]: califa64793118c: Link UP Dec 16 13:05:46.840225 systemd-networkd[2157]: califa64793118c: Gained carrier Dec 16 13:05:46.874000 audit: BPF prog-id=215 op=LOAD Dec 16 13:05:46.874000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff16daecb0 a2=94 a3=1 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.874000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.874000 audit: BPF prog-id=215 op=UNLOAD Dec 16 13:05:46.874000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff16daecb0 a2=94 a3=1 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.874000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.486 [INFO][5388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0 goldmane-666569f655- calico-system 7ece9954-95e0-4c99-bc7f-1fc28a21ac1f 838 0 2025-12-16 13:05:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-a-bc3c22631a goldmane-666569f655-k2n4d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califa64793118c [] [] }} 
ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Namespace="calico-system" Pod="goldmane-666569f655-k2n4d" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.486 [INFO][5388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Namespace="calico-system" Pod="goldmane-666569f655-k2n4d" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.584 [INFO][5490] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" HandleID="k8s-pod-network.990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Workload="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.584 [INFO][5490] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" HandleID="k8s-pod-network.990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Workload="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-bc3c22631a", "pod":"goldmane-666569f655-k2n4d", "timestamp":"2025-12-16 13:05:46.583997311 +0000 UTC"}, Hostname:"ci-4515.1.0-a-bc3c22631a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.584 [INFO][5490] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.716 [INFO][5490] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.717 [INFO][5490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-bc3c22631a' Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.782 [INFO][5490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.788 [INFO][5490] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.794 [INFO][5490] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.797 [INFO][5490] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.799 [INFO][5490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.801 [INFO][5490] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.802 [INFO][5490] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.809 [INFO][5490] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" 
host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.827 [INFO][5490] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.196/26] block=192.168.32.192/26 handle="k8s-pod-network.990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.827 [INFO][5490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.196/26] handle="k8s-pod-network.990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.827 [INFO][5490] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:46.875454 containerd[2529]: 2025-12-16 13:05:46.827 [INFO][5490] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.196/26] IPv6=[] ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" HandleID="k8s-pod-network.990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Workload="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" Dec 16 13:05:46.878719 containerd[2529]: 2025-12-16 13:05:46.831 [INFO][5388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Namespace="calico-system" Pod="goldmane-666569f655-k2n4d" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7ece9954-95e0-4c99-bc7f-1fc28a21ac1f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"", Pod:"goldmane-666569f655-k2n4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa64793118c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:46.878719 containerd[2529]: 2025-12-16 13:05:46.831 [INFO][5388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.196/32] ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Namespace="calico-system" Pod="goldmane-666569f655-k2n4d" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" Dec 16 13:05:46.878719 containerd[2529]: 2025-12-16 13:05:46.831 [INFO][5388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa64793118c ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Namespace="calico-system" Pod="goldmane-666569f655-k2n4d" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" Dec 16 13:05:46.878719 containerd[2529]: 2025-12-16 13:05:46.839 [INFO][5388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Namespace="calico-system" Pod="goldmane-666569f655-k2n4d" 
WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" Dec 16 13:05:46.878719 containerd[2529]: 2025-12-16 13:05:46.841 [INFO][5388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Namespace="calico-system" Pod="goldmane-666569f655-k2n4d" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7ece9954-95e0-4c99-bc7f-1fc28a21ac1f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda", Pod:"goldmane-666569f655-k2n4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa64793118c", MAC:"da:aa:bd:d8:60:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 
13:05:46.878719 containerd[2529]: 2025-12-16 13:05:46.868 [INFO][5388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" Namespace="calico-system" Pod="goldmane-666569f655-k2n4d" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-goldmane--666569f655--k2n4d-eth0" Dec 16 13:05:46.882519 systemd[1]: Started cri-containerd-13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6.scope - libcontainer container 13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6. Dec 16 13:05:46.888294 containerd[2529]: time="2025-12-16T13:05:46.887977944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:46.896000 audit: BPF prog-id=216 op=LOAD Dec 16 13:05:46.896000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff16daeca0 a2=94 a3=4 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.896000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.896000 audit: BPF prog-id=216 op=UNLOAD Dec 16 13:05:46.896000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff16daeca0 a2=0 a3=4 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.896000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.897000 audit: BPF prog-id=217 op=LOAD Dec 16 13:05:46.897000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff16daeb00 a2=94 a3=5 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.897000 audit: BPF prog-id=217 op=UNLOAD Dec 16 13:05:46.897000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff16daeb00 a2=0 a3=5 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.897000 audit: BPF prog-id=218 op=LOAD Dec 16 13:05:46.897000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff16daed20 a2=94 a3=6 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.897000 audit: BPF prog-id=218 op=UNLOAD Dec 16 13:05:46.897000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff16daed20 a2=0 a3=6 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.897000 audit: BPF prog-id=219 op=LOAD Dec 16 13:05:46.897000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff16dae4d0 a2=94 a3=88 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:46.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.898000 audit: BPF prog-id=220 op=LOAD Dec 16 13:05:46.898000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff16dae350 a2=94 a3=2 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.898000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.898000 audit: BPF prog-id=220 op=UNLOAD Dec 16 13:05:46.898000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff16dae380 a2=0 a3=7fff16dae480 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.898000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.898000 audit: BPF prog-id=219 op=UNLOAD Dec 16 13:05:46.898000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=8d36d10 a2=0 a3=1747c24e6abf6112 items=0 ppid=5288 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.898000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:46.899810 containerd[2529]: time="2025-12-16T13:05:46.898042399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:05:46.899810 containerd[2529]: time="2025-12-16T13:05:46.898137484Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:46.899876 kubelet[3970]: E1216 13:05:46.898799 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:05:46.899876 kubelet[3970]: E1216 13:05:46.898864 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:05:46.901372 kubelet[3970]: E1216 13:05:46.899066 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8c8cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56b946485d-w9zv5_calico-apiserver(5156a21d-5db3-420c-a907-ae3cb29e174c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:46.901372 kubelet[3970]: E1216 13:05:46.900815 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:05:46.912000 audit: BPF prog-id=221 op=LOAD Dec 16 13:05:46.913000 audit: BPF prog-id=222 op=LOAD Dec 16 13:05:46.913000 audit[5589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5576 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623363303537343338643634616665353338306361366565353130 Dec 16 13:05:46.913000 audit: BPF prog-id=222 op=UNLOAD Dec 16 13:05:46.913000 audit[5589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5576 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.913000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623363303537343338643634616665353338306361366565353130 Dec 16 13:05:46.914000 audit: BPF prog-id=223 op=LOAD Dec 16 13:05:46.914000 audit[5589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5576 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623363303537343338643634616665353338306361366565353130 Dec 16 13:05:46.914000 audit: BPF prog-id=224 op=LOAD Dec 16 13:05:46.914000 audit[5589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5576 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623363303537343338643634616665353338306361366565353130 Dec 16 13:05:46.915000 audit: BPF prog-id=224 op=UNLOAD Dec 16 13:05:46.915000 audit[5589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5576 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:05:46.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623363303537343338643634616665353338306361366565353130 Dec 16 13:05:46.915000 audit: BPF prog-id=223 op=UNLOAD Dec 16 13:05:46.915000 audit[5589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5576 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623363303537343338643634616665353338306361366565353130 Dec 16 13:05:46.915000 audit: BPF prog-id=225 op=LOAD Dec 16 13:05:46.915000 audit[5589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5576 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623363303537343338643634616665353338306361366565353130 Dec 16 13:05:46.917000 audit: BPF prog-id=226 op=LOAD Dec 16 13:05:46.917000 audit[5621]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff31e40df0 a2=98 a3=1999999999999999 items=0 ppid=5288 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.917000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:46.917000 audit: BPF prog-id=226 op=UNLOAD Dec 16 13:05:46.917000 audit[5621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff31e40dc0 a3=0 items=0 ppid=5288 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.917000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:46.918000 audit: BPF prog-id=227 op=LOAD Dec 16 13:05:46.918000 audit[5621]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff31e40cd0 a2=94 a3=ffff items=0 ppid=5288 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.918000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:46.918000 audit: BPF prog-id=227 op=UNLOAD Dec 16 13:05:46.918000 audit[5621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff31e40cd0 a2=94 a3=ffff items=0 ppid=5288 pid=5621 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.918000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:46.918000 audit: BPF prog-id=228 op=LOAD Dec 16 13:05:46.918000 audit[5621]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff31e40d10 a2=94 a3=7fff31e40ef0 items=0 ppid=5288 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.918000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:46.918000 audit: BPF prog-id=228 op=UNLOAD Dec 16 13:05:46.918000 audit[5621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff31e40d10 a2=94 a3=7fff31e40ef0 items=0 ppid=5288 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.918000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:46.943943 containerd[2529]: time="2025-12-16T13:05:46.943864353Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-b7d557678-kmhrd,Uid:d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"32477fdd91932224f4877a9468e6f2988d866fbddfb72ffa196bbe550d297097\"" Dec 16 13:05:46.953255 containerd[2529]: time="2025-12-16T13:05:46.952571579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:05:46.964470 containerd[2529]: time="2025-12-16T13:05:46.964439906Z" level=info msg="connecting to shim 990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda" address="unix:///run/containerd/s/7316e96ce0b89dc992adcca88b4001354bc78892d52d40df396342725b78f57c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:46.989538 containerd[2529]: time="2025-12-16T13:05:46.989491418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8bf7db64d-zpn94,Uid:3ce56745-c6a8-40c1-81e7-b66ad27dd817,Namespace:calico-system,Attempt:0,} returns sandbox id \"13b3c057438d64afe5380ca6ee510b13894e20c991e7263e83a44e7197d949b6\"" Dec 16 13:05:47.001353 systemd[1]: Started cri-containerd-990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda.scope - libcontainer container 990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda. 
Dec 16 13:05:47.036309 kubelet[3970]: I1216 13:05:47.036271 3970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4e3625-dd87-4672-8c32-4a4421202c74" path="/var/lib/kubelet/pods/7e4e3625-dd87-4672-8c32-4a4421202c74/volumes" Dec 16 13:05:47.052000 audit: BPF prog-id=229 op=LOAD Dec 16 13:05:47.053000 audit: BPF prog-id=230 op=LOAD Dec 16 13:05:47.053000 audit[5658]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5642 pid=5658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306230396436306364656433623236313634323862653066346637 Dec 16 13:05:47.053000 audit: BPF prog-id=230 op=UNLOAD Dec 16 13:05:47.053000 audit[5658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5642 pid=5658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306230396436306364656433623236313634323862653066346637 Dec 16 13:05:47.053000 audit: BPF prog-id=231 op=LOAD Dec 16 13:05:47.053000 audit[5658]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5642 pid=5658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306230396436306364656433623236313634323862653066346637 Dec 16 13:05:47.053000 audit: BPF prog-id=232 op=LOAD Dec 16 13:05:47.053000 audit[5658]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5642 pid=5658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306230396436306364656433623236313634323862653066346637 Dec 16 13:05:47.053000 audit: BPF prog-id=232 op=UNLOAD Dec 16 13:05:47.053000 audit[5658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5642 pid=5658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306230396436306364656433623236313634323862653066346637 Dec 16 13:05:47.053000 audit: BPF prog-id=231 op=UNLOAD Dec 16 13:05:47.053000 audit[5658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5642 pid=5658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306230396436306364656433623236313634323862653066346637 Dec 16 13:05:47.053000 audit: BPF prog-id=233 op=LOAD Dec 16 13:05:47.053000 audit[5658]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5642 pid=5658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306230396436306364656433623236313634323862653066346637 Dec 16 13:05:47.114359 containerd[2529]: time="2025-12-16T13:05:47.114333664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k2n4d,Uid:7ece9954-95e0-4c99-bc7f-1fc28a21ac1f,Namespace:calico-system,Attempt:0,} returns sandbox id \"990b09d60cded3b2616428be0f4f7c9cfa096c0fbde80d57e5bffde86f313cda\"" Dec 16 13:05:47.182116 systemd-networkd[2157]: vxlan.calico: Link UP Dec 16 13:05:47.184647 systemd-networkd[2157]: vxlan.calico: Gained carrier Dec 16 13:05:47.199184 kubelet[3970]: E1216 13:05:47.198253 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:05:47.207000 audit: BPF prog-id=234 op=LOAD Dec 16 13:05:47.207000 audit[5699]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8b1a5e30 a2=98 a3=0 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.207000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.208000 audit: BPF prog-id=234 op=UNLOAD Dec 16 13:05:47.208000 audit[5699]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff8b1a5e00 a3=0 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.208000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.208000 audit: BPF prog-id=235 op=LOAD Dec 16 13:05:47.208000 audit[5699]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8b1a5c40 a2=94 a3=54428f items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.208000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.208000 audit: BPF prog-id=235 op=UNLOAD Dec 16 13:05:47.208000 audit[5699]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff8b1a5c40 a2=94 a3=54428f items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.208000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.208000 audit: BPF prog-id=236 op=LOAD Dec 16 13:05:47.208000 audit[5699]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8b1a5c70 a2=94 a3=2 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.208000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.208000 audit: BPF prog-id=236 op=UNLOAD Dec 16 13:05:47.208000 audit[5699]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff8b1a5c70 a2=0 a3=2 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.208000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.208000 audit: BPF prog-id=237 op=LOAD Dec 16 13:05:47.208000 audit[5699]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8b1a5a20 a2=94 a3=4 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.208000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.208000 audit: BPF prog-id=237 op=UNLOAD Dec 16 13:05:47.208000 audit[5699]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff8b1a5a20 a2=94 a3=4 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.208000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.208000 audit: BPF prog-id=238 op=LOAD Dec 16 13:05:47.208000 audit[5699]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8b1a5b20 a2=94 a3=7fff8b1a5ca0 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.208000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.208000 audit: BPF prog-id=238 op=UNLOAD Dec 16 13:05:47.208000 audit[5699]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff8b1a5b20 a2=0 a3=7fff8b1a5ca0 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.208000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.209000 audit: BPF prog-id=239 op=LOAD Dec 16 13:05:47.209000 audit[5699]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8b1a5250 a2=94 a3=2 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.209000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.209000 audit: BPF prog-id=239 op=UNLOAD Dec 16 13:05:47.209000 audit[5699]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff8b1a5250 a2=0 a3=2 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.209000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.209000 audit: BPF prog-id=240 op=LOAD Dec 16 13:05:47.209000 audit[5699]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8b1a5350 a2=94 a3=30 items=0 ppid=5288 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.209000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:47.215000 audit: BPF prog-id=241 op=LOAD Dec 16 13:05:47.215000 audit[5703]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd130c7720 a2=98 a3=0 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.215000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.215000 audit: BPF prog-id=241 op=UNLOAD Dec 16 13:05:47.215000 audit[5703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd130c76f0 a3=0 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.215000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.216000 audit: BPF prog-id=242 op=LOAD Dec 16 13:05:47.216000 audit[5703]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd130c7510 a2=94 a3=54428f items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.216000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.216000 audit: BPF prog-id=242 op=UNLOAD Dec 16 13:05:47.216000 audit[5703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd130c7510 a2=94 a3=54428f items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.216000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.216000 audit: BPF prog-id=243 op=LOAD Dec 16 13:05:47.216000 audit[5703]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd130c7540 a2=94 a3=2 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.216000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.216000 audit: BPF prog-id=243 op=UNLOAD Dec 16 13:05:47.216000 audit[5703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd130c7540 a2=0 a3=2 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.216000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.224476 containerd[2529]: time="2025-12-16T13:05:47.224040294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:47.227295 containerd[2529]: time="2025-12-16T13:05:47.227250271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:05:47.227435 containerd[2529]: time="2025-12-16T13:05:47.227321266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:47.227556 kubelet[3970]: E1216 13:05:47.227446 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:05:47.227556 kubelet[3970]: E1216 13:05:47.227488 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:05:47.228363 kubelet[3970]: E1216 13:05:47.227814 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed7d3f55cc7f4131bcc3c705ab88d498,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-848zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b7d557678-kmhrd_calico-system(d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:47.230184 containerd[2529]: time="2025-12-16T13:05:47.229650357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:05:47.255000 audit[5709]: NETFILTER_CFG table=filter:122 family=2 entries=20 op=nft_register_rule pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:47.255000 audit[5709]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc2b5f440 a2=0 a3=7ffdc2b5f42c items=0 ppid=4122 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.255000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:47.260000 audit[5709]: NETFILTER_CFG table=nat:123 family=2 entries=14 op=nft_register_rule pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:47.260000 audit[5709]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdc2b5f440 a2=0 a3=0 items=0 ppid=4122 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:47.368000 audit: BPF prog-id=244 op=LOAD Dec 16 13:05:47.368000 audit[5703]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd130c7400 a2=94 a3=1 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:05:47.368000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.368000 audit: BPF prog-id=244 op=UNLOAD Dec 16 13:05:47.368000 audit[5703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd130c7400 a2=94 a3=1 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.368000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.378000 audit: BPF prog-id=245 op=LOAD Dec 16 13:05:47.378000 audit[5703]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd130c73f0 a2=94 a3=4 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.378000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.378000 audit: BPF prog-id=245 op=UNLOAD Dec 16 13:05:47.378000 audit[5703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd130c73f0 a2=0 a3=4 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.378000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.379000 audit: BPF prog-id=246 op=LOAD Dec 16 13:05:47.379000 audit[5703]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd130c7250 a2=94 a3=5 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.379000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.379000 audit: BPF prog-id=246 op=UNLOAD Dec 16 13:05:47.379000 audit[5703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd130c7250 a2=0 a3=5 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.379000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.379000 audit: BPF prog-id=247 op=LOAD Dec 16 13:05:47.379000 audit[5703]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd130c7470 a2=94 a3=6 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.379000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.379000 audit: BPF prog-id=247 op=UNLOAD Dec 16 13:05:47.379000 audit[5703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd130c7470 a2=0 a3=6 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.379000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.380000 audit: BPF prog-id=248 op=LOAD Dec 16 13:05:47.380000 audit[5703]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd130c6c20 a2=94 a3=88 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.380000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.381000 audit: BPF prog-id=249 op=LOAD Dec 16 13:05:47.381000 audit[5703]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd130c6aa0 a2=94 a3=2 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.381000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.381000 audit: BPF prog-id=249 op=UNLOAD Dec 16 13:05:47.381000 audit[5703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd130c6ad0 a2=0 a3=7ffd130c6bd0 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.381000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.381000 audit: BPF prog-id=248 op=UNLOAD Dec 16 13:05:47.381000 audit[5703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=24e47d10 a2=0 a3=a21ab28220ee00e0 items=0 ppid=5288 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.381000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:47.387000 audit: BPF prog-id=240 op=UNLOAD Dec 16 13:05:47.387000 audit[5288]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00080bc00 a2=0 a3=0 items=0 ppid=5257 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.387000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 13:05:47.497000 audit[5734]: NETFILTER_CFG table=nat:124 
family=2 entries=15 op=nft_register_chain pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:47.497000 audit[5734]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd3387f6a0 a2=0 a3=7ffd3387f68c items=0 ppid=5288 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.497000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:47.499390 containerd[2529]: time="2025-12-16T13:05:47.499150988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:47.503069 containerd[2529]: time="2025-12-16T13:05:47.502957177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:05:47.503069 containerd[2529]: time="2025-12-16T13:05:47.502997694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:47.503258 kubelet[3970]: E1216 13:05:47.503214 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:05:47.503322 kubelet[3970]: E1216 13:05:47.503270 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:05:47.503593 kubelet[3970]: E1216 13:05:47.503547 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57vvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8bf7db64d-zpn94_calico-system(3ce56745-c6a8-40c1-81e7-b66ad27dd817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:47.504099 containerd[2529]: time="2025-12-16T13:05:47.504071734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:05:47.505567 kubelet[3970]: E1216 13:05:47.505527 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:05:47.507000 audit[5736]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5736 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Dec 16 13:05:47.507000 audit[5736]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc677d3df0 a2=0 a3=7ffc677d3ddc items=0 ppid=5288 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.507000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:47.529000 audit[5735]: NETFILTER_CFG table=raw:126 family=2 entries=21 op=nft_register_chain pid=5735 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:47.529000 audit[5735]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc61e64f70 a2=0 a3=7ffc61e64f5c items=0 ppid=5288 pid=5735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.529000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:47.541000 audit[5738]: NETFILTER_CFG table=filter:127 family=2 entries=206 op=nft_register_chain pid=5738 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:47.541000 audit[5738]: SYSCALL arch=c000003e syscall=46 success=yes exit=120232 a0=3 a1=7ffd2c276870 a2=0 a3=7ffd2c27685c items=0 ppid=5288 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.541000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:47.708373 systemd-networkd[2157]: cali662b269a68c: Gained IPv6LL Dec 16 13:05:47.781953 containerd[2529]: time="2025-12-16T13:05:47.781858189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:47.788482 containerd[2529]: time="2025-12-16T13:05:47.788439270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:05:47.788576 containerd[2529]: time="2025-12-16T13:05:47.788533840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:47.788732 kubelet[3970]: E1216 13:05:47.788697 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:05:47.788794 kubelet[3970]: E1216 13:05:47.788746 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:05:47.789171 containerd[2529]: time="2025-12-16T13:05:47.789120465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:05:47.789933 kubelet[3970]: E1216 13:05:47.789878 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6fg6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k2n4d_calico-system(7ece9954-95e0-4c99-bc7f-1fc28a21ac1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:47.791101 kubelet[3970]: E1216 13:05:47.791039 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 16 13:05:48.063858 containerd[2529]: time="2025-12-16T13:05:48.063709765Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:48.075615 containerd[2529]: time="2025-12-16T13:05:48.075567786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:05:48.075760 containerd[2529]: time="2025-12-16T13:05:48.075679532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:48.075883 kubelet[3970]: E1216 13:05:48.075840 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:05:48.076282 kubelet[3970]: E1216 13:05:48.075900 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:05:48.076282 kubelet[3970]: E1216 13:05:48.076066 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-848zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b7d557678-kmhrd_calico-system(d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:48.077383 kubelet[3970]: E1216 13:05:48.077310 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac" Dec 16 13:05:48.197333 kubelet[3970]: E1216 13:05:48.196934 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:05:48.197333 kubelet[3970]: E1216 13:05:48.197048 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:05:48.198326 kubelet[3970]: E1216 13:05:48.198286 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac" Dec 16 13:05:48.198594 kubelet[3970]: E1216 13:05:48.198376 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 16 13:05:48.266000 audit[5749]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5749 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:48.266000 audit[5749]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6083ead0 a2=0 a3=7ffd6083eabc items=0 ppid=4122 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:48.272000 audit[5749]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5749 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:48.272000 audit[5749]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd6083ead0 a2=0 a3=0 items=0 ppid=4122 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:48.283000 audit[5751]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:48.283000 audit[5751]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce66e84b0 a2=0 a3=7ffce66e849c items=0 ppid=4122 pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:48.287000 audit[5751]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:48.287000 audit[5751]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffce66e84b0 a2=0 a3=0 items=0 ppid=4122 
pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.287000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:48.668408 systemd-networkd[2157]: califa64793118c: Gained IPv6LL Dec 16 13:05:48.733299 systemd-networkd[2157]: caliceecda4a3aa: Gained IPv6LL Dec 16 13:05:48.734112 systemd-networkd[2157]: vxlan.calico: Gained IPv6LL Dec 16 13:05:55.033694 containerd[2529]: time="2025-12-16T13:05:55.033255210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-6kjlw,Uid:46b8f3f5-9271-4b12-86c1-faf1b8e7af82,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:55.159214 systemd-networkd[2157]: cali5d4ad02c6f9: Link UP Dec 16 13:05:55.159496 systemd-networkd[2157]: cali5d4ad02c6f9: Gained carrier Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.081 [INFO][5763] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0 calico-apiserver-56b946485d- calico-apiserver 46b8f3f5-9271-4b12-86c1-faf1b8e7af82 840 0 2025-12-16 13:05:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56b946485d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-bc3c22631a calico-apiserver-56b946485d-6kjlw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d4ad02c6f9 [] [] }} ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-6kjlw" 
WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.081 [INFO][5763] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-6kjlw" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.106 [INFO][5775] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" HandleID="k8s-pod-network.8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Workload="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.107 [INFO][5775] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" HandleID="k8s-pod-network.8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Workload="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-bc3c22631a", "pod":"calico-apiserver-56b946485d-6kjlw", "timestamp":"2025-12-16 13:05:55.106908748 +0000 UTC"}, Hostname:"ci-4515.1.0-a-bc3c22631a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.107 [INFO][5775] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.107 [INFO][5775] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.107 [INFO][5775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-bc3c22631a' Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.111 [INFO][5775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.115 [INFO][5775] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.119 [INFO][5775] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.120 [INFO][5775] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.122 [INFO][5775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.122 [INFO][5775] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.123 [INFO][5775] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.132 [INFO][5775] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" 
host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.153 [INFO][5775] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.197/26] block=192.168.32.192/26 handle="k8s-pod-network.8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.153 [INFO][5775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.197/26] handle="k8s-pod-network.8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.153 [INFO][5775] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:55.176646 containerd[2529]: 2025-12-16 13:05:55.153 [INFO][5775] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.197/26] IPv6=[] ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" HandleID="k8s-pod-network.8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Workload="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" Dec 16 13:05:55.178200 containerd[2529]: 2025-12-16 13:05:55.154 [INFO][5763] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-6kjlw" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0", GenerateName:"calico-apiserver-56b946485d-", Namespace:"calico-apiserver", SelfLink:"", UID:"46b8f3f5-9271-4b12-86c1-faf1b8e7af82", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 11, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b946485d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"", Pod:"calico-apiserver-56b946485d-6kjlw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d4ad02c6f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:55.178200 containerd[2529]: 2025-12-16 13:05:55.155 [INFO][5763] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.197/32] ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-6kjlw" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" Dec 16 13:05:55.178200 containerd[2529]: 2025-12-16 13:05:55.155 [INFO][5763] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d4ad02c6f9 ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-6kjlw" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" Dec 16 13:05:55.178200 containerd[2529]: 2025-12-16 13:05:55.157 [INFO][5763] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-6kjlw" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" Dec 16 13:05:55.178200 containerd[2529]: 2025-12-16 13:05:55.158 [INFO][5763] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-6kjlw" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0", GenerateName:"calico-apiserver-56b946485d-", Namespace:"calico-apiserver", SelfLink:"", UID:"46b8f3f5-9271-4b12-86c1-faf1b8e7af82", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b946485d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e", Pod:"calico-apiserver-56b946485d-6kjlw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d4ad02c6f9", MAC:"0a:aa:2f:d7:ad:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:55.178200 containerd[2529]: 2025-12-16 13:05:55.173 [INFO][5763] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" Namespace="calico-apiserver" Pod="calico-apiserver-56b946485d-6kjlw" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-calico--apiserver--56b946485d--6kjlw-eth0" Dec 16 13:05:55.189000 audit[5791]: NETFILTER_CFG table=filter:132 family=2 entries=45 op=nft_register_chain pid=5791 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:55.191573 kernel: kauditd_printk_skb: 309 callbacks suppressed Dec 16 13:05:55.191642 kernel: audit: type=1325 audit(1765890355.189:711): table=filter:132 family=2 entries=45 op=nft_register_chain pid=5791 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:55.189000 audit[5791]: SYSCALL arch=c000003e syscall=46 success=yes exit=24248 a0=3 a1=7ffddb962420 a2=0 a3=7ffddb96240c items=0 ppid=5288 pid=5791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.199929 kernel: audit: type=1300 audit(1765890355.189:711): arch=c000003e syscall=46 success=yes exit=24248 a0=3 a1=7ffddb962420 a2=0 a3=7ffddb96240c items=0 ppid=5288 pid=5791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.189000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:55.203174 kernel: audit: type=1327 audit(1765890355.189:711): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:55.249671 containerd[2529]: time="2025-12-16T13:05:55.249577921Z" level=info msg="connecting to shim 8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e" address="unix:///run/containerd/s/9828df8bb442062e9ee56138da2b43e96df35aba83ab223b57a2a5d7e1be2f93" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:55.277390 systemd[1]: Started cri-containerd-8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e.scope - libcontainer container 8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e. Dec 16 13:05:55.285000 audit: BPF prog-id=250 op=LOAD Dec 16 13:05:55.290727 kernel: audit: type=1334 audit(1765890355.285:712): prog-id=250 op=LOAD Dec 16 13:05:55.290793 kernel: audit: type=1334 audit(1765890355.287:713): prog-id=251 op=LOAD Dec 16 13:05:55.287000 audit: BPF prog-id=251 op=LOAD Dec 16 13:05:55.298470 kernel: audit: type=1300 audit(1765890355.287:713): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5801 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.287000 audit[5813]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5801 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.303379 kernel: audit: type=1327 audit(1765890355.287:713): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303763363836366630316165643133393936346664623839303063 Dec 16 13:05:55.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303763363836366630316165643133393936346664623839303063 Dec 16 13:05:55.305064 kernel: audit: type=1334 audit(1765890355.287:714): prog-id=251 op=UNLOAD Dec 16 13:05:55.287000 audit: BPF prog-id=251 op=UNLOAD Dec 16 13:05:55.287000 audit[5813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.315250 kernel: audit: type=1300 audit(1765890355.287:714): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.315306 kernel: audit: type=1327 audit(1765890355.287:714): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303763363836366630316165643133393936346664623839303063 Dec 16 13:05:55.287000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303763363836366630316165643133393936346664623839303063 Dec 16 13:05:55.287000 audit: BPF prog-id=252 op=LOAD Dec 16 13:05:55.287000 audit[5813]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5801 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303763363836366630316165643133393936346664623839303063 Dec 16 13:05:55.287000 audit: BPF prog-id=253 op=LOAD Dec 16 13:05:55.287000 audit[5813]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5801 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303763363836366630316165643133393936346664623839303063 Dec 16 13:05:55.287000 audit: BPF prog-id=253 op=UNLOAD Dec 16 13:05:55.287000 audit[5813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:05:55.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303763363836366630316165643133393936346664623839303063 Dec 16 13:05:55.287000 audit: BPF prog-id=252 op=UNLOAD Dec 16 13:05:55.287000 audit[5813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303763363836366630316165643133393936346664623839303063 Dec 16 13:05:55.287000 audit: BPF prog-id=254 op=LOAD Dec 16 13:05:55.287000 audit[5813]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5801 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:55.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303763363836366630316165643133393936346664623839303063 Dec 16 13:05:55.337475 containerd[2529]: time="2025-12-16T13:05:55.337375757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b946485d-6kjlw,Uid:46b8f3f5-9271-4b12-86c1-faf1b8e7af82,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"8f07c6866f01aed139964fdb8900c5aa39105674ad407ded607fff65a8b5b65e\"" Dec 16 13:05:55.339604 containerd[2529]: time="2025-12-16T13:05:55.339403210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:05:55.623198 containerd[2529]: time="2025-12-16T13:05:55.623130019Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:55.626307 containerd[2529]: time="2025-12-16T13:05:55.626259719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:05:55.626307 containerd[2529]: time="2025-12-16T13:05:55.626287519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:55.626600 kubelet[3970]: E1216 13:05:55.626554 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:05:55.627025 kubelet[3970]: E1216 13:05:55.626623 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:05:55.627025 kubelet[3970]: E1216 13:05:55.626802 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56b946485d-6kjlw_calico-apiserver(46b8f3f5-9271-4b12-86c1-faf1b8e7af82): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:55.628333 kubelet[3970]: E1216 13:05:55.628258 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:05:56.212121 kubelet[3970]: E1216 13:05:56.212066 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:05:56.245000 audit[5839]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:56.245000 audit[5839]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdb139f9e0 a2=0 a3=7ffdb139f9cc items=0 ppid=4122 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:56.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:56.249000 audit[5839]: 
NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:56.249000 audit[5839]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdb139f9e0 a2=0 a3=0 items=0 ppid=4122 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:56.249000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:56.860539 systemd-networkd[2157]: cali5d4ad02c6f9: Gained IPv6LL Dec 16 13:05:57.034191 containerd[2529]: time="2025-12-16T13:05:57.033779381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgr5,Uid:38439d67-e506-407a-b65d-e7dd3b4f13ff,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:57.141054 systemd-networkd[2157]: cali9e609b5ea34: Link UP Dec 16 13:05:57.141919 systemd-networkd[2157]: cali9e609b5ea34: Gained carrier Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.077 [INFO][5841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0 csi-node-driver- calico-system 38439d67-e506-407a-b65d-e7dd3b4f13ff 720 0 2025-12-16 13:05:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-a-bc3c22631a csi-node-driver-swgr5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9e609b5ea34 [] [] }} ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" 
Namespace="calico-system" Pod="csi-node-driver-swgr5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.077 [INFO][5841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Namespace="calico-system" Pod="csi-node-driver-swgr5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.099 [INFO][5852] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" HandleID="k8s-pod-network.17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Workload="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.099 [INFO][5852] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" HandleID="k8s-pod-network.17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Workload="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f180), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-bc3c22631a", "pod":"csi-node-driver-swgr5", "timestamp":"2025-12-16 13:05:57.099197662 +0000 UTC"}, Hostname:"ci-4515.1.0-a-bc3c22631a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.099 [INFO][5852] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.099 [INFO][5852] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.099 [INFO][5852] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-bc3c22631a' Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.104 [INFO][5852] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.108 [INFO][5852] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.111 [INFO][5852] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.112 [INFO][5852] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.114 [INFO][5852] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.115 [INFO][5852] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.117 [INFO][5852] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667 Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.122 [INFO][5852] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" 
host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.133 [INFO][5852] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.198/26] block=192.168.32.192/26 handle="k8s-pod-network.17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.133 [INFO][5852] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.198/26] handle="k8s-pod-network.17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.133 [INFO][5852] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:57.171397 containerd[2529]: 2025-12-16 13:05:57.133 [INFO][5852] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.198/26] IPv6=[] ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" HandleID="k8s-pod-network.17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Workload="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" Dec 16 13:05:57.172017 containerd[2529]: 2025-12-16 13:05:57.136 [INFO][5841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Namespace="calico-system" Pod="csi-node-driver-swgr5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"38439d67-e506-407a-b65d-e7dd3b4f13ff", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"", Pod:"csi-node-driver-swgr5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e609b5ea34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:57.172017 containerd[2529]: 2025-12-16 13:05:57.136 [INFO][5841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.198/32] ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Namespace="calico-system" Pod="csi-node-driver-swgr5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" Dec 16 13:05:57.172017 containerd[2529]: 2025-12-16 13:05:57.136 [INFO][5841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e609b5ea34 ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Namespace="calico-system" Pod="csi-node-driver-swgr5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" Dec 16 13:05:57.172017 containerd[2529]: 2025-12-16 13:05:57.141 [INFO][5841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Namespace="calico-system" 
Pod="csi-node-driver-swgr5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" Dec 16 13:05:57.172017 containerd[2529]: 2025-12-16 13:05:57.142 [INFO][5841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Namespace="calico-system" Pod="csi-node-driver-swgr5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"38439d67-e506-407a-b65d-e7dd3b4f13ff", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667", Pod:"csi-node-driver-swgr5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e609b5ea34", MAC:"02:bb:12:67:55:80", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:57.172017 containerd[2529]: 2025-12-16 13:05:57.168 [INFO][5841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" Namespace="calico-system" Pod="csi-node-driver-swgr5" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-csi--node--driver--swgr5-eth0" Dec 16 13:05:57.185000 audit[5866]: NETFILTER_CFG table=filter:135 family=2 entries=48 op=nft_register_chain pid=5866 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:57.185000 audit[5866]: SYSCALL arch=c000003e syscall=46 success=yes exit=23124 a0=3 a1=7ffc9b1e7f60 a2=0 a3=7ffc9b1e7f4c items=0 ppid=5288 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:57.185000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:57.217769 kubelet[3970]: E1216 13:05:57.217366 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:05:57.221852 containerd[2529]: time="2025-12-16T13:05:57.221778473Z" level=info msg="connecting to shim 17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667" 
address="unix:///run/containerd/s/ecac811bfdfc09b026d1db3cb5863e2418ad89fa9a09c54d43329b5d972f23e1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:57.254348 systemd[1]: Started cri-containerd-17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667.scope - libcontainer container 17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667. Dec 16 13:05:57.261000 audit: BPF prog-id=255 op=LOAD Dec 16 13:05:57.262000 audit: BPF prog-id=256 op=LOAD Dec 16 13:05:57.262000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5875 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:57.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653132343466626263633830393161313631613138323036313738 Dec 16 13:05:57.262000 audit: BPF prog-id=256 op=UNLOAD Dec 16 13:05:57.262000 audit[5887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5875 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:57.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653132343466626263633830393161313631613138323036313738 Dec 16 13:05:57.262000 audit: BPF prog-id=257 op=LOAD Dec 16 13:05:57.262000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5875 pid=5887 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:57.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653132343466626263633830393161313631613138323036313738 Dec 16 13:05:57.262000 audit: BPF prog-id=258 op=LOAD Dec 16 13:05:57.262000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5875 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:57.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653132343466626263633830393161313631613138323036313738 Dec 16 13:05:57.262000 audit: BPF prog-id=258 op=UNLOAD Dec 16 13:05:57.262000 audit[5887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5875 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:57.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653132343466626263633830393161313631613138323036313738 Dec 16 13:05:57.262000 audit: BPF prog-id=257 op=UNLOAD Dec 16 13:05:57.262000 audit[5887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 
ppid=5875 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:57.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653132343466626263633830393161313631613138323036313738 Dec 16 13:05:57.262000 audit: BPF prog-id=259 op=LOAD Dec 16 13:05:57.262000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5875 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:57.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653132343466626263633830393161313631613138323036313738 Dec 16 13:05:57.279115 containerd[2529]: time="2025-12-16T13:05:57.279090679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgr5,Uid:38439d67-e506-407a-b65d-e7dd3b4f13ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"17e1244fbbcc8091a161a1820617821ffb5a9ebd8a0acaef049adca6336d0667\"" Dec 16 13:05:57.281640 containerd[2529]: time="2025-12-16T13:05:57.281089726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:05:57.555475 containerd[2529]: time="2025-12-16T13:05:57.555356410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:57.559306 containerd[2529]: time="2025-12-16T13:05:57.559263089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 
13:05:57.559399 containerd[2529]: time="2025-12-16T13:05:57.559266163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:05:57.559588 kubelet[3970]: E1216 13:05:57.559549 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:05:57.559661 kubelet[3970]: E1216 13:05:57.559603 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:05:57.559821 kubelet[3970]: E1216 13:05:57.559788 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2q4hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 13:05:57.563191 containerd[2529]: time="2025-12-16T13:05:57.562307372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:05:57.841411 containerd[2529]: time="2025-12-16T13:05:57.841260023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:57.844343 containerd[2529]: time="2025-12-16T13:05:57.844226516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:05:57.844343 containerd[2529]: time="2025-12-16T13:05:57.844259756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:57.844525 kubelet[3970]: E1216 13:05:57.844489 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:05:57.844591 kubelet[3970]: E1216 13:05:57.844541 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:05:57.844759 kubelet[3970]: E1216 13:05:57.844716 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2q4hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:57.846166 kubelet[3970]: E1216 13:05:57.846105 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:58.219522 kubelet[3970]: E1216 13:05:58.219437 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:59.035197 containerd[2529]: time="2025-12-16T13:05:59.034497257Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-28jk6,Uid:d9b60a6d-0ab6-4b06-a75c-aa98a45e065b,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:59.036019 containerd[2529]: time="2025-12-16T13:05:59.035069961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jxhjr,Uid:550ac59b-859a-421a-a47e-2547bf61257f,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:59.036980 containerd[2529]: time="2025-12-16T13:05:59.036758390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:05:59.164437 systemd-networkd[2157]: cali9e609b5ea34: Gained IPv6LL Dec 16 13:05:59.185110 systemd-networkd[2157]: cali85bc35eb885: Link UP Dec 16 13:05:59.186032 systemd-networkd[2157]: cali85bc35eb885: Gained carrier Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.104 [INFO][5924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0 coredns-668d6bf9bc- kube-system d9b60a6d-0ab6-4b06-a75c-aa98a45e065b 828 0 2025-12-16 13:04:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-bc3c22631a coredns-668d6bf9bc-28jk6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85bc35eb885 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Namespace="kube-system" Pod="coredns-668d6bf9bc-28jk6" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.104 [INFO][5924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Namespace="kube-system" Pod="coredns-668d6bf9bc-28jk6" 
WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.137 [INFO][5945] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" HandleID="k8s-pod-network.63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Workload="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.137 [INFO][5945] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" HandleID="k8s-pod-network.63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Workload="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-bc3c22631a", "pod":"coredns-668d6bf9bc-28jk6", "timestamp":"2025-12-16 13:05:59.137649132 +0000 UTC"}, Hostname:"ci-4515.1.0-a-bc3c22631a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.138 [INFO][5945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.138 [INFO][5945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.138 [INFO][5945] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-bc3c22631a' Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.143 [INFO][5945] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.147 [INFO][5945] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.151 [INFO][5945] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.153 [INFO][5945] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.155 [INFO][5945] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.155 [INFO][5945] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.156 [INFO][5945] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.164 [INFO][5945] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.178 [INFO][5945] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.32.199/26] block=192.168.32.192/26 handle="k8s-pod-network.63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.179 [INFO][5945] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.199/26] handle="k8s-pod-network.63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.179 [INFO][5945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:59.210925 containerd[2529]: 2025-12-16 13:05:59.179 [INFO][5945] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.199/26] IPv6=[] ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" HandleID="k8s-pod-network.63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Workload="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" Dec 16 13:05:59.212597 containerd[2529]: 2025-12-16 13:05:59.182 [INFO][5924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Namespace="kube-system" Pod="coredns-668d6bf9bc-28jk6" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d9b60a6d-0ab6-4b06-a75c-aa98a45e065b", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 4, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"", Pod:"coredns-668d6bf9bc-28jk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85bc35eb885", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:59.212597 containerd[2529]: 2025-12-16 13:05:59.182 [INFO][5924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.199/32] ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Namespace="kube-system" Pod="coredns-668d6bf9bc-28jk6" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" Dec 16 13:05:59.212597 containerd[2529]: 2025-12-16 13:05:59.182 [INFO][5924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85bc35eb885 ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Namespace="kube-system" Pod="coredns-668d6bf9bc-28jk6" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" Dec 16 13:05:59.212597 containerd[2529]: 2025-12-16 13:05:59.186 [INFO][5924] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Namespace="kube-system" Pod="coredns-668d6bf9bc-28jk6" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" Dec 16 13:05:59.212597 containerd[2529]: 2025-12-16 13:05:59.187 [INFO][5924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Namespace="kube-system" Pod="coredns-668d6bf9bc-28jk6" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d9b60a6d-0ab6-4b06-a75c-aa98a45e065b", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 4, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c", Pod:"coredns-668d6bf9bc-28jk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85bc35eb885", 
MAC:"b6:d2:04:d4:9e:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:59.212803 containerd[2529]: 2025-12-16 13:05:59.209 [INFO][5924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" Namespace="kube-system" Pod="coredns-668d6bf9bc-28jk6" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--28jk6-eth0" Dec 16 13:05:59.221974 kubelet[3970]: E1216 13:05:59.221891 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:05:59.247000 audit[5969]: NETFILTER_CFG table=filter:136 family=2 entries=58 op=nft_register_chain pid=5969 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:59.247000 audit[5969]: SYSCALL arch=c000003e syscall=46 success=yes exit=27288 a0=3 a1=7fff75b522b0 a2=0 a3=7fff75b5229c items=0 ppid=5288 pid=5969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.247000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:59.271906 containerd[2529]: time="2025-12-16T13:05:59.271848703Z" level=info msg="connecting to shim 63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c" address="unix:///run/containerd/s/688c289c29daef2faf3f07dc8fcb95fb364f091bf5da760fcb4e43b6203d00fd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:59.294814 systemd-networkd[2157]: calidb42d1077e7: Link UP Dec 16 13:05:59.296409 systemd-networkd[2157]: calidb42d1077e7: Gained carrier Dec 16 13:05:59.312784 systemd[1]: Started cri-containerd-63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c.scope - libcontainer container 63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c. 
Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.133 [INFO][5935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0 coredns-668d6bf9bc- kube-system 550ac59b-859a-421a-a47e-2547bf61257f 837 0 2025-12-16 13:04:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-bc3c22631a coredns-668d6bf9bc-jxhjr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidb42d1077e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jxhjr" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.133 [INFO][5935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jxhjr" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.170 [INFO][5953] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" HandleID="k8s-pod-network.f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Workload="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.170 [INFO][5953] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" HandleID="k8s-pod-network.f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" 
Workload="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-bc3c22631a", "pod":"coredns-668d6bf9bc-jxhjr", "timestamp":"2025-12-16 13:05:59.170475924 +0000 UTC"}, Hostname:"ci-4515.1.0-a-bc3c22631a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.170 [INFO][5953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.179 [INFO][5953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.179 [INFO][5953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-bc3c22631a' Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.244 [INFO][5953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.251 [INFO][5953] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.261 [INFO][5953] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.263 [INFO][5953] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.266 [INFO][5953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.318255 
containerd[2529]: 2025-12-16 13:05:59.266 [INFO][5953] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.268 [INFO][5953] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588 Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.274 [INFO][5953] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.290 [INFO][5953] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.200/26] block=192.168.32.192/26 handle="k8s-pod-network.f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.290 [INFO][5953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.200/26] handle="k8s-pod-network.f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" host="ci-4515.1.0-a-bc3c22631a" Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.290 [INFO][5953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:05:59.318255 containerd[2529]: 2025-12-16 13:05:59.290 [INFO][5953] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.200/26] IPv6=[] ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" HandleID="k8s-pod-network.f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Workload="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" Dec 16 13:05:59.318680 containerd[2529]: 2025-12-16 13:05:59.292 [INFO][5935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jxhjr" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"550ac59b-859a-421a-a47e-2547bf61257f", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 4, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"", Pod:"coredns-668d6bf9bc-jxhjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calidb42d1077e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:59.318680 containerd[2529]: 2025-12-16 13:05:59.292 [INFO][5935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.200/32] ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jxhjr" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" Dec 16 13:05:59.318680 containerd[2529]: 2025-12-16 13:05:59.292 [INFO][5935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb42d1077e7 ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jxhjr" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" Dec 16 13:05:59.318680 containerd[2529]: 2025-12-16 13:05:59.296 [INFO][5935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jxhjr" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" Dec 16 13:05:59.318680 containerd[2529]: 2025-12-16 13:05:59.297 [INFO][5935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jxhjr" 
WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"550ac59b-859a-421a-a47e-2547bf61257f", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 4, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-bc3c22631a", ContainerID:"f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588", Pod:"coredns-668d6bf9bc-jxhjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb42d1077e7", MAC:"4a:ce:a0:11:f0:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:59.318842 
containerd[2529]: 2025-12-16 13:05:59.312 [INFO][5935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" Namespace="kube-system" Pod="coredns-668d6bf9bc-jxhjr" WorkloadEndpoint="ci--4515.1.0--a--bc3c22631a-k8s-coredns--668d6bf9bc--jxhjr-eth0" Dec 16 13:05:59.321393 containerd[2529]: time="2025-12-16T13:05:59.321360238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:59.324992 containerd[2529]: time="2025-12-16T13:05:59.324950389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:05:59.325393 containerd[2529]: time="2025-12-16T13:05:59.325356536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:59.325544 kubelet[3970]: E1216 13:05:59.325498 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:05:59.325662 kubelet[3970]: E1216 13:05:59.325645 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:05:59.326032 kubelet[3970]: E1216 13:05:59.325984 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6fg6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k2n4d_calico-system(7ece9954-95e0-4c99-bc7f-1fc28a21ac1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:59.327928 kubelet[3970]: E1216 13:05:59.327886 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 16 13:05:59.328000 audit: BPF prog-id=260 op=LOAD Dec 16 13:05:59.329000 audit: BPF prog-id=261 op=LOAD Dec 16 13:05:59.329000 audit[5991]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5979 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653464666530653664633362643239393637633133383837633132 Dec 16 13:05:59.329000 audit: BPF prog-id=261 op=UNLOAD Dec 16 13:05:59.329000 audit[5991]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5979 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653464666530653664633362643239393637633133383837633132 Dec 16 13:05:59.329000 audit: BPF prog-id=262 op=LOAD Dec 16 13:05:59.329000 audit[5991]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5979 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653464666530653664633362643239393637633133383837633132 Dec 16 13:05:59.329000 audit: BPF prog-id=263 op=LOAD Dec 16 13:05:59.329000 audit[5991]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5979 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653464666530653664633362643239393637633133383837633132 Dec 16 13:05:59.329000 audit: BPF prog-id=263 op=UNLOAD Dec 16 13:05:59.329000 audit[5991]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5979 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653464666530653664633362643239393637633133383837633132 Dec 16 13:05:59.329000 audit: BPF prog-id=262 op=UNLOAD Dec 16 13:05:59.329000 audit[5991]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5979 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653464666530653664633362643239393637633133383837633132 Dec 16 13:05:59.329000 audit: BPF prog-id=264 op=LOAD Dec 16 13:05:59.329000 audit[5991]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5979 pid=5991 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653464666530653664633362643239393637633133383837633132 Dec 16 13:05:59.346000 audit[6019]: NETFILTER_CFG table=filter:137 family=2 entries=52 op=nft_register_chain pid=6019 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:59.346000 audit[6019]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffdd4af4f70 a2=0 a3=7ffdd4af4f5c items=0 ppid=5288 pid=6019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.346000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:59.369788 containerd[2529]: time="2025-12-16T13:05:59.369726650Z" level=info msg="connecting to shim f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588" address="unix:///run/containerd/s/43a4aef5d1259f57a6dd7fd4197d251ccaedb6da753ca701f1e54f4b9880a65d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:59.383270 containerd[2529]: time="2025-12-16T13:05:59.383230439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-28jk6,Uid:d9b60a6d-0ab6-4b06-a75c-aa98a45e065b,Namespace:kube-system,Attempt:0,} returns sandbox id \"63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c\"" Dec 16 13:05:59.388087 containerd[2529]: time="2025-12-16T13:05:59.388025523Z" level=info msg="CreateContainer within sandbox 
\"63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:05:59.396467 systemd[1]: Started cri-containerd-f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588.scope - libcontainer container f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588. Dec 16 13:05:59.404000 audit: BPF prog-id=265 op=LOAD Dec 16 13:05:59.404000 audit: BPF prog-id=266 op=LOAD Dec 16 13:05:59.404000 audit[6045]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6029 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634323932656230343838376636323635373065626265303431653936 Dec 16 13:05:59.404000 audit: BPF prog-id=266 op=UNLOAD Dec 16 13:05:59.404000 audit[6045]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6029 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634323932656230343838376636323635373065626265303431653936 Dec 16 13:05:59.404000 audit: BPF prog-id=267 op=LOAD Dec 16 13:05:59.404000 audit[6045]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6029 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634323932656230343838376636323635373065626265303431653936 Dec 16 13:05:59.404000 audit: BPF prog-id=268 op=LOAD Dec 16 13:05:59.404000 audit[6045]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6029 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634323932656230343838376636323635373065626265303431653936 Dec 16 13:05:59.404000 audit: BPF prog-id=268 op=UNLOAD Dec 16 13:05:59.404000 audit[6045]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6029 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634323932656230343838376636323635373065626265303431653936 Dec 16 13:05:59.404000 audit: BPF prog-id=267 op=UNLOAD Dec 16 13:05:59.404000 audit[6045]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6029 pid=6045 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634323932656230343838376636323635373065626265303431653936 Dec 16 13:05:59.404000 audit: BPF prog-id=269 op=LOAD Dec 16 13:05:59.404000 audit[6045]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6029 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634323932656230343838376636323635373065626265303431653936 Dec 16 13:05:59.410227 containerd[2529]: time="2025-12-16T13:05:59.410132967Z" level=info msg="Container 08c51e43016100fe88114b733e694caf138d489eb357f573d9f29570ba432c2b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:59.431586 containerd[2529]: time="2025-12-16T13:05:59.431564148Z" level=info msg="CreateContainer within sandbox \"63e4dfe0e6dc3bd29967c13887c128021c6e41345efe4638da96ffd11fa8293c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08c51e43016100fe88114b733e694caf138d489eb357f573d9f29570ba432c2b\"" Dec 16 13:05:59.433185 containerd[2529]: time="2025-12-16T13:05:59.432336745Z" level=info msg="StartContainer for \"08c51e43016100fe88114b733e694caf138d489eb357f573d9f29570ba432c2b\"" Dec 16 13:05:59.434283 containerd[2529]: time="2025-12-16T13:05:59.433821157Z" level=info 
msg="connecting to shim 08c51e43016100fe88114b733e694caf138d489eb357f573d9f29570ba432c2b" address="unix:///run/containerd/s/688c289c29daef2faf3f07dc8fcb95fb364f091bf5da760fcb4e43b6203d00fd" protocol=ttrpc version=3 Dec 16 13:05:59.445402 containerd[2529]: time="2025-12-16T13:05:59.445378604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jxhjr,Uid:550ac59b-859a-421a-a47e-2547bf61257f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588\"" Dec 16 13:05:59.449112 containerd[2529]: time="2025-12-16T13:05:59.449084825Z" level=info msg="CreateContainer within sandbox \"f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:05:59.456321 systemd[1]: Started cri-containerd-08c51e43016100fe88114b733e694caf138d489eb357f573d9f29570ba432c2b.scope - libcontainer container 08c51e43016100fe88114b733e694caf138d489eb357f573d9f29570ba432c2b. 
Dec 16 13:05:59.467000 audit: BPF prog-id=270 op=LOAD Dec 16 13:05:59.467000 audit: BPF prog-id=271 op=LOAD Dec 16 13:05:59.467000 audit[6071]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5979 pid=6071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038633531653433303136313030666538383131346237333365363934 Dec 16 13:05:59.468000 audit: BPF prog-id=271 op=UNLOAD Dec 16 13:05:59.468000 audit[6071]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5979 pid=6071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038633531653433303136313030666538383131346237333365363934 Dec 16 13:05:59.468000 audit: BPF prog-id=272 op=LOAD Dec 16 13:05:59.468000 audit[6071]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5979 pid=6071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.468000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038633531653433303136313030666538383131346237333365363934 Dec 16 13:05:59.468000 audit: BPF prog-id=273 op=LOAD Dec 16 13:05:59.468000 audit[6071]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5979 pid=6071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038633531653433303136313030666538383131346237333365363934 Dec 16 13:05:59.468000 audit: BPF prog-id=273 op=UNLOAD Dec 16 13:05:59.468000 audit[6071]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5979 pid=6071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038633531653433303136313030666538383131346237333365363934 Dec 16 13:05:59.468000 audit: BPF prog-id=272 op=UNLOAD Dec 16 13:05:59.468000 audit[6071]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5979 pid=6071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:59.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038633531653433303136313030666538383131346237333365363934 Dec 16 13:05:59.468000 audit: BPF prog-id=274 op=LOAD Dec 16 13:05:59.468000 audit[6071]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5979 pid=6071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038633531653433303136313030666538383131346237333365363934 Dec 16 13:05:59.478697 containerd[2529]: time="2025-12-16T13:05:59.478121052Z" level=info msg="Container 130646c21739df4abc221996319c5b12c9ef1b69a9a7f767b569745221d91e00: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:59.489317 containerd[2529]: time="2025-12-16T13:05:59.489243264Z" level=info msg="StartContainer for \"08c51e43016100fe88114b733e694caf138d489eb357f573d9f29570ba432c2b\" returns successfully" Dec 16 13:05:59.494906 containerd[2529]: time="2025-12-16T13:05:59.494876830Z" level=info msg="CreateContainer within sandbox \"f4292eb04887f626570ebbe041e96ff4b6388d66dee0c5c038085c477774a588\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"130646c21739df4abc221996319c5b12c9ef1b69a9a7f767b569745221d91e00\"" Dec 16 13:05:59.496409 containerd[2529]: time="2025-12-16T13:05:59.495530296Z" level=info msg="StartContainer for \"130646c21739df4abc221996319c5b12c9ef1b69a9a7f767b569745221d91e00\"" Dec 16 13:05:59.497213 containerd[2529]: 
time="2025-12-16T13:05:59.497187424Z" level=info msg="connecting to shim 130646c21739df4abc221996319c5b12c9ef1b69a9a7f767b569745221d91e00" address="unix:///run/containerd/s/43a4aef5d1259f57a6dd7fd4197d251ccaedb6da753ca701f1e54f4b9880a65d" protocol=ttrpc version=3 Dec 16 13:05:59.515338 systemd[1]: Started cri-containerd-130646c21739df4abc221996319c5b12c9ef1b69a9a7f767b569745221d91e00.scope - libcontainer container 130646c21739df4abc221996319c5b12c9ef1b69a9a7f767b569745221d91e00. Dec 16 13:05:59.529000 audit: BPF prog-id=275 op=LOAD Dec 16 13:05:59.530000 audit: BPF prog-id=276 op=LOAD Dec 16 13:05:59.530000 audit[6103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=6029 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303634366332313733396466346162633232313939363331396335 Dec 16 13:05:59.530000 audit: BPF prog-id=276 op=UNLOAD Dec 16 13:05:59.530000 audit[6103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6029 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303634366332313733396466346162633232313939363331396335 Dec 16 13:05:59.530000 audit: BPF prog-id=277 op=LOAD Dec 16 13:05:59.530000 audit[6103]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=6029 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303634366332313733396466346162633232313939363331396335 Dec 16 13:05:59.530000 audit: BPF prog-id=278 op=LOAD Dec 16 13:05:59.530000 audit[6103]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=6029 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303634366332313733396466346162633232313939363331396335 Dec 16 13:05:59.530000 audit: BPF prog-id=278 op=UNLOAD Dec 16 13:05:59.530000 audit[6103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6029 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303634366332313733396466346162633232313939363331396335 Dec 16 13:05:59.530000 audit: BPF prog-id=277 op=UNLOAD 
Dec 16 13:05:59.530000 audit[6103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6029 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303634366332313733396466346162633232313939363331396335 Dec 16 13:05:59.530000 audit: BPF prog-id=279 op=LOAD Dec 16 13:05:59.530000 audit[6103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=6029 pid=6103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:59.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303634366332313733396466346162633232313939363331396335 Dec 16 13:05:59.555951 containerd[2529]: time="2025-12-16T13:05:59.555878513Z" level=info msg="StartContainer for \"130646c21739df4abc221996319c5b12c9ef1b69a9a7f767b569745221d91e00\" returns successfully" Dec 16 13:06:00.034496 containerd[2529]: time="2025-12-16T13:06:00.034110973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:06:00.245853 kubelet[3970]: I1216 13:06:00.245788 3970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jxhjr" podStartSLOduration=64.24576726 podStartE2EDuration="1m4.24576726s" podCreationTimestamp="2025-12-16 13:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:00.245331957 +0000 UTC m=+67.305868575" watchObservedRunningTime="2025-12-16 13:06:00.24576726 +0000 UTC m=+67.306303881" Dec 16 13:06:00.263694 kubelet[3970]: I1216 13:06:00.263479 3970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-28jk6" podStartSLOduration=64.263462728 podStartE2EDuration="1m4.263462728s" podCreationTimestamp="2025-12-16 13:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:00.263081211 +0000 UTC m=+67.323617828" watchObservedRunningTime="2025-12-16 13:06:00.263462728 +0000 UTC m=+67.323999346" Dec 16 13:06:00.270000 audit[6139]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=6139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:00.273236 kernel: kauditd_printk_skb: 140 callbacks suppressed Dec 16 13:06:00.273312 kernel: audit: type=1325 audit(1765890360.270:765): table=filter:138 family=2 entries=20 op=nft_register_rule pid=6139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:00.279175 kernel: audit: type=1300 audit(1765890360.270:765): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffef31421f0 a2=0 a3=7ffef31421dc items=0 ppid=4122 pid=6139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:00.270000 audit[6139]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffef31421f0 a2=0 a3=7ffef31421dc items=0 ppid=4122 pid=6139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:06:00.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:00.290341 kernel: audit: type=1327 audit(1765890360.270:765): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:00.294192 kernel: audit: type=1325 audit(1765890360.286:766): table=nat:139 family=2 entries=14 op=nft_register_rule pid=6139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:00.286000 audit[6139]: NETFILTER_CFG table=nat:139 family=2 entries=14 op=nft_register_rule pid=6139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:00.286000 audit[6139]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffef31421f0 a2=0 a3=0 items=0 ppid=4122 pid=6139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:00.304005 kernel: audit: type=1300 audit(1765890360.286:766): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffef31421f0 a2=0 a3=0 items=0 ppid=4122 pid=6139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:00.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:00.308181 kernel: audit: type=1327 audit(1765890360.286:766): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:00.311000 audit[6141]: NETFILTER_CFG table=filter:140 family=2 entries=17 op=nft_register_rule pid=6141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:00.316414 kernel: audit: 
type=1325 audit(1765890360.311:767): table=filter:140 family=2 entries=17 op=nft_register_rule pid=6141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:00.316464 kernel: audit: type=1300 audit(1765890360.311:767): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4ad34e70 a2=0 a3=7ffc4ad34e5c items=0 ppid=4122 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:00.311000 audit[6141]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4ad34e70 a2=0 a3=7ffc4ad34e5c items=0 ppid=4122 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:00.316690 containerd[2529]: time="2025-12-16T13:06:00.316199249Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:00.324048 kernel: audit: type=1327 audit(1765890360.311:767): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:00.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:00.324124 containerd[2529]: time="2025-12-16T13:06:00.323220793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:06:00.324124 containerd[2529]: time="2025-12-16T13:06:00.323316850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:00.324222 kubelet[3970]: E1216 
13:06:00.323483 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:00.324222 kubelet[3970]: E1216 13:06:00.323521 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:00.324222 kubelet[3970]: E1216 13:06:00.323729 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed7d3f55cc7f4131bcc3c705ab88d498,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-848zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,Localh
ostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b7d557678-kmhrd_calico-system(d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:00.319000 audit[6141]: NETFILTER_CFG table=nat:141 family=2 entries=35 op=nft_register_chain pid=6141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:00.327174 kernel: audit: type=1325 audit(1765890360.319:768): table=nat:141 family=2 entries=35 op=nft_register_chain pid=6141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:00.327200 containerd[2529]: time="2025-12-16T13:06:00.326363189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:06:00.319000 audit[6141]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc4ad34e70 a2=0 a3=7ffc4ad34e5c items=0 ppid=4122 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:00.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:00.597813 containerd[2529]: time="2025-12-16T13:06:00.597655525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:00.600860 containerd[2529]: time="2025-12-16T13:06:00.600818601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:06:00.601027 containerd[2529]: time="2025-12-16T13:06:00.600842075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:00.601214 kubelet[3970]: E1216 13:06:00.601174 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:00.601287 kubelet[3970]: E1216 13:06:00.601247 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:00.601538 kubelet[3970]: E1216 13:06:00.601502 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57vvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8bf7db64d-zpn94_calico-system(3ce56745-c6a8-40c1-81e7-b66ad27dd817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:00.602852 containerd[2529]: time="2025-12-16T13:06:00.602816743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:06:00.603373 kubelet[3970]: E1216 13:06:00.603328 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:06:00.636320 systemd-networkd[2157]: cali85bc35eb885: Gained IPv6LL Dec 16 13:06:00.887281 containerd[2529]: 
time="2025-12-16T13:06:00.887232546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:00.890505 containerd[2529]: time="2025-12-16T13:06:00.890458698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:06:00.890596 containerd[2529]: time="2025-12-16T13:06:00.890458232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:00.890753 kubelet[3970]: E1216 13:06:00.890705 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:00.890854 kubelet[3970]: E1216 13:06:00.890762 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:00.890952 kubelet[3970]: E1216 13:06:00.890920 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-848zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b7d557678-kmhrd_calico-system(d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:00.892258 kubelet[3970]: E1216 13:06:00.892205 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac" Dec 16 13:06:01.084380 systemd-networkd[2157]: calidb42d1077e7: Gained IPv6LL Dec 16 13:06:01.341000 audit[6143]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=6143 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:01.341000 audit[6143]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7d9c05b0 a2=0 a3=7fff7d9c059c items=0 ppid=4122 pid=6143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.341000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:01.368000 audit[6143]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=6143 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:06:01.368000 audit[6143]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff7d9c05b0 a2=0 a3=7fff7d9c059c items=0 
ppid=4122 pid=6143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:01.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:06:03.034886 containerd[2529]: time="2025-12-16T13:06:03.034673336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:03.318866 containerd[2529]: time="2025-12-16T13:06:03.318652558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:03.321920 containerd[2529]: time="2025-12-16T13:06:03.321890989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:03.322033 containerd[2529]: time="2025-12-16T13:06:03.321962164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:03.322210 kubelet[3970]: E1216 13:06:03.322133 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:03.322647 kubelet[3970]: E1216 13:06:03.322227 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:03.322647 kubelet[3970]: E1216 13:06:03.322406 
3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8c8cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56b946485d-w9zv5_calico-apiserver(5156a21d-5db3-420c-a907-ae3cb29e174c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:03.324040 kubelet[3970]: E1216 13:06:03.323993 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:06:09.036344 containerd[2529]: time="2025-12-16T13:06:09.036277150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:09.306782 containerd[2529]: time="2025-12-16T13:06:09.306614199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
13:06:09.310559 containerd[2529]: time="2025-12-16T13:06:09.310526911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:09.310753 containerd[2529]: time="2025-12-16T13:06:09.310615771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:09.310805 kubelet[3970]: E1216 13:06:09.310761 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:09.311200 kubelet[3970]: E1216 13:06:09.310822 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:09.311200 kubelet[3970]: E1216 13:06:09.310990 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56b946485d-6kjlw_calico-apiserver(46b8f3f5-9271-4b12-86c1-faf1b8e7af82): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:09.312239 kubelet[3970]: E1216 13:06:09.312180 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:06:11.034732 kubelet[3970]: E1216 13:06:11.034669 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 16 13:06:12.036490 containerd[2529]: time="2025-12-16T13:06:12.036413083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:06:12.038661 kubelet[3970]: E1216 13:06:12.038562 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac" Dec 16 13:06:12.323438 containerd[2529]: time="2025-12-16T13:06:12.323252649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:12.327427 containerd[2529]: time="2025-12-16T13:06:12.327371236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:06:12.327545 containerd[2529]: time="2025-12-16T13:06:12.327377982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:12.327657 kubelet[3970]: E1216 13:06:12.327617 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:12.327716 kubelet[3970]: E1216 13:06:12.327671 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:12.327947 kubelet[3970]: E1216 13:06:12.327878 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2q4hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 13:06:12.330281 containerd[2529]: time="2025-12-16T13:06:12.330233654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:06:12.611053 containerd[2529]: time="2025-12-16T13:06:12.610716754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:12.614643 containerd[2529]: time="2025-12-16T13:06:12.614578696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:06:12.614805 containerd[2529]: time="2025-12-16T13:06:12.614596028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:12.614932 kubelet[3970]: E1216 13:06:12.614877 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:12.614988 kubelet[3970]: E1216 13:06:12.614948 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:12.615142 kubelet[3970]: E1216 13:06:12.615112 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2q4hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:12.616475 kubelet[3970]: E1216 13:06:12.616366 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:06:14.033969 kubelet[3970]: E1216 13:06:14.033905 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:06:18.035099 kubelet[3970]: E1216 13:06:18.034346 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:06:24.035625 kubelet[3970]: E1216 13:06:24.035285 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:06:25.038186 containerd[2529]: time="2025-12-16T13:06:25.037515239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:06:25.315293 containerd[2529]: time="2025-12-16T13:06:25.315076107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:25.330300 containerd[2529]: time="2025-12-16T13:06:25.330243402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:06:25.330449 containerd[2529]: time="2025-12-16T13:06:25.330366922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:25.330739 kubelet[3970]: E1216 13:06:25.330686 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:25.331147 kubelet[3970]: E1216 13:06:25.330759 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:25.331147 kubelet[3970]: E1216 13:06:25.331044 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed7d3f55cc7f4131bcc3c705ab88d498,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-848zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b7d557678-kmhrd_calico-system(d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:25.332615 containerd[2529]: time="2025-12-16T13:06:25.332581156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:06:25.608181 containerd[2529]: time="2025-12-16T13:06:25.607999681Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:25.611775 containerd[2529]: time="2025-12-16T13:06:25.611620267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:06:25.611775 containerd[2529]: time="2025-12-16T13:06:25.611742379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:25.612860 kubelet[3970]: E1216 13:06:25.612803 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:25.612987 kubelet[3970]: E1216 13:06:25.612882 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:25.613311 kubelet[3970]: E1216 13:06:25.613269 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57vvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8bf7db64d-zpn94_calico-system(3ce56745-c6a8-40c1-81e7-b66ad27dd817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:25.613528 containerd[2529]: time="2025-12-16T13:06:25.613435379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:06:25.614942 kubelet[3970]: E1216 13:06:25.614871 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:06:25.886597 containerd[2529]: time="2025-12-16T13:06:25.884840037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 
16 13:06:25.888627 containerd[2529]: time="2025-12-16T13:06:25.888445085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:06:25.888627 containerd[2529]: time="2025-12-16T13:06:25.888563397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:25.889013 kubelet[3970]: E1216 13:06:25.888968 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:25.889089 kubelet[3970]: E1216 13:06:25.889039 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:25.889273 kubelet[3970]: E1216 13:06:25.889210 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-848zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b7d557678-kmhrd_calico-system(d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:25.890462 kubelet[3970]: E1216 13:06:25.890373 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac" Dec 16 13:06:26.034278 containerd[2529]: time="2025-12-16T13:06:26.033959898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:06:26.313858 containerd[2529]: time="2025-12-16T13:06:26.313599221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:26.317440 containerd[2529]: time="2025-12-16T13:06:26.317370270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:06:26.317440 containerd[2529]: time="2025-12-16T13:06:26.317411511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:26.318034 kubelet[3970]: E1216 13:06:26.317620 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:26.318034 kubelet[3970]: E1216 13:06:26.317676 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:26.318034 kubelet[3970]: E1216 13:06:26.317873 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6fg6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k2n4d_calico-system(7ece9954-95e0-4c99-bc7f-1fc28a21ac1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:26.319425 kubelet[3970]: E1216 13:06:26.319370 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 
16 13:06:28.034575 kubelet[3970]: E1216 13:06:28.034502 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:06:30.036438 containerd[2529]: time="2025-12-16T13:06:30.036376698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:30.325526 containerd[2529]: time="2025-12-16T13:06:30.325189779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:30.335389 containerd[2529]: time="2025-12-16T13:06:30.335238556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:30.335389 containerd[2529]: time="2025-12-16T13:06:30.335349058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:30.335790 kubelet[3970]: E1216 13:06:30.335746 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:30.336885 kubelet[3970]: E1216 13:06:30.335903 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:30.336885 kubelet[3970]: E1216 13:06:30.336428 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8c8cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56b946485d-w9zv5_calico-apiserver(5156a21d-5db3-420c-a907-ae3cb29e174c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:30.338263 kubelet[3970]: E1216 13:06:30.338208 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:06:36.034902 containerd[2529]: time="2025-12-16T13:06:36.034737545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:36.346464 containerd[2529]: time="2025-12-16T13:06:36.346287119Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
13:06:36.349571 containerd[2529]: time="2025-12-16T13:06:36.349517810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:36.349777 containerd[2529]: time="2025-12-16T13:06:36.349548293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:36.350052 kubelet[3970]: E1216 13:06:36.349743 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:36.350553 kubelet[3970]: E1216 13:06:36.350073 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:36.350553 kubelet[3970]: E1216 13:06:36.350288 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56b946485d-6kjlw_calico-apiserver(46b8f3f5-9271-4b12-86c1-faf1b8e7af82): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:36.351575 kubelet[3970]: E1216 13:06:36.351525 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:06:36.372204 update_engine[2510]: I20251216 13:06:36.371273 2510 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 13:06:36.372204 update_engine[2510]: I20251216 13:06:36.371335 2510 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 13:06:36.372204 update_engine[2510]: I20251216 13:06:36.371533 2510 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 13:06:36.375307 update_engine[2510]: I20251216 13:06:36.374441 2510 omaha_request_params.cc:62] Current group set to beta Dec 16 13:06:36.375307 update_engine[2510]: I20251216 13:06:36.374612 2510 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 13:06:36.375307 update_engine[2510]: I20251216 13:06:36.374620 2510 update_attempter.cc:643] Scheduling an action processor start. 
Dec 16 13:06:36.375307 update_engine[2510]: I20251216 13:06:36.374644 2510 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:06:36.375307 update_engine[2510]: I20251216 13:06:36.374696 2510 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 13:06:36.375307 update_engine[2510]: I20251216 13:06:36.374761 2510 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:06:36.375307 update_engine[2510]: I20251216 13:06:36.374768 2510 omaha_request_action.cc:272] Request: Dec 16 13:06:36.375307 update_engine[2510]: Dec 16 13:06:36.375307 update_engine[2510]: Dec 16 13:06:36.375307 update_engine[2510]: Dec 16 13:06:36.375307 update_engine[2510]: Dec 16 13:06:36.375307 update_engine[2510]: Dec 16 13:06:36.375307 update_engine[2510]: Dec 16 13:06:36.375307 update_engine[2510]: Dec 16 13:06:36.375307 update_engine[2510]: Dec 16 13:06:36.375307 update_engine[2510]: I20251216 13:06:36.374775 2510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:06:36.375683 locksmithd[2602]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 13:06:36.378177 update_engine[2510]: I20251216 13:06:36.376758 2510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:06:36.379030 update_engine[2510]: I20251216 13:06:36.379004 2510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 13:06:36.449175 update_engine[2510]: E20251216 13:06:36.449115 2510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:06:36.449339 update_engine[2510]: I20251216 13:06:36.449324 2510 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 13:06:37.035219 kubelet[3970]: E1216 13:06:37.034916 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:06:39.035513 containerd[2529]: time="2025-12-16T13:06:39.035453210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:06:39.461417 containerd[2529]: time="2025-12-16T13:06:39.461188352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:39.464511 containerd[2529]: time="2025-12-16T13:06:39.464353712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:06:39.464511 containerd[2529]: time="2025-12-16T13:06:39.464478615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:39.464929 kubelet[3970]: E1216 13:06:39.464878 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:06:39.466058 kubelet[3970]: E1216 13:06:39.465359 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:06:39.466058 kubelet[3970]: E1216 13:06:39.465579 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2q4hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:06:39.469185 containerd[2529]: time="2025-12-16T13:06:39.468753191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 13:06:39.737535 containerd[2529]: time="2025-12-16T13:06:39.736804518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:06:39.741302 containerd[2529]: time="2025-12-16T13:06:39.741131111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 13:06:39.741302 containerd[2529]: time="2025-12-16T13:06:39.741269409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 16 13:06:39.741782 kubelet[3970]: E1216 13:06:39.741724 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:06:39.743965 kubelet[3970]: E1216 13:06:39.741857 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:06:39.744386 kubelet[3970]: E1216 13:06:39.744340 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2q4hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:06:39.746285 kubelet[3970]: E1216 13:06:39.746226 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff"
Dec 16 13:06:39.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.32:22-10.200.16.10:58492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:39.761609 systemd[1]: Started sshd@7-10.200.4.32:22-10.200.16.10:58492.service - OpenSSH per-connection server daemon (10.200.16.10:58492).
Dec 16 13:06:39.762692 kernel: kauditd_printk_skb: 8 callbacks suppressed
Dec 16 13:06:39.762740 kernel: audit: type=1130 audit(1765890399.760:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.32:22-10.200.16.10:58492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:40.035883 kubelet[3970]: E1216 13:06:40.035366 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f"
Dec 16 13:06:40.281000 audit[6202]: USER_ACCT pid=6202 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.286765 sshd[6202]: Accepted publickey for core from 10.200.16.10 port 58492 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:06:40.289210 kernel: audit: type=1101 audit(1765890400.281:772): pid=6202 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.288000 audit[6202]: CRED_ACQ pid=6202 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.291703 sshd-session[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:06:40.297177 kernel: audit: type=1103 audit(1765890400.288:773): pid=6202 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.302211 kernel: audit: type=1006 audit(1765890400.290:774): pid=6202 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Dec 16 13:06:40.290000 audit[6202]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6363ce60 a2=3 a3=0 items=0 ppid=1 pid=6202 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:06:40.310177 kernel: audit: type=1300 audit(1765890400.290:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6363ce60 a2=3 a3=0 items=0 ppid=1 pid=6202 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:06:40.290000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:06:40.316895 systemd-logind[2509]: New session 10 of user core.
Dec 16 13:06:40.317261 kernel: audit: type=1327 audit(1765890400.290:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:06:40.323831 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 13:06:40.330000 audit[6202]: USER_START pid=6202 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.339531 kernel: audit: type=1105 audit(1765890400.330:775): pid=6202 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.339000 audit[6205]: CRED_ACQ pid=6205 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.348176 kernel: audit: type=1103 audit(1765890400.339:776): pid=6205 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.666188 sshd[6205]: Connection closed by 10.200.16.10 port 58492
Dec 16 13:06:40.667369 sshd-session[6202]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:40.668000 audit[6202]: USER_END pid=6202 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.673048 systemd[1]: sshd@7-10.200.4.32:22-10.200.16.10:58492.service: Deactivated successfully.
Dec 16 13:06:40.677585 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 13:06:40.680066 kernel: audit: type=1106 audit(1765890400.668:777): pid=6202 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.680137 kernel: audit: type=1104 audit(1765890400.668:778): pid=6202 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.668000 audit[6202]: CRED_DISP pid=6202 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:40.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.32:22-10.200.16.10:58492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:40.685225 systemd-logind[2509]: Session 10 logged out. Waiting for processes to exit.
Dec 16 13:06:40.686036 systemd-logind[2509]: Removed session 10.
Dec 16 13:06:41.040206 kubelet[3970]: E1216 13:06:41.039629 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c"
Dec 16 13:06:41.041552 kubelet[3970]: E1216 13:06:41.041503 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac"
Dec 16 13:06:45.789869 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:06:45.790062 kernel: audit: type=1130 audit(1765890405.778:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.32:22-10.200.16.10:33432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:45.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.32:22-10.200.16.10:33432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:45.779617 systemd[1]: Started sshd@8-10.200.4.32:22-10.200.16.10:33432.service - OpenSSH per-connection server daemon (10.200.16.10:33432).
Dec 16 13:06:46.294000 audit[6218]: USER_ACCT pid=6218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.298053 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:06:46.309394 kernel: audit: type=1101 audit(1765890406.294:781): pid=6218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.309534 sshd[6218]: Accepted publickey for core from 10.200.16.10 port 33432 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:06:46.294000 audit[6218]: CRED_ACQ pid=6218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.316572 kernel: audit: type=1103 audit(1765890406.294:782): pid=6218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.322174 kernel: audit: type=1006 audit(1765890406.294:783): pid=6218 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Dec 16 13:06:46.294000 audit[6218]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff40d26a00 a2=3 a3=0 items=0 ppid=1 pid=6218 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:06:46.331491 systemd-logind[2509]: New session 11 of user core.
Dec 16 13:06:46.332317 kernel: audit: type=1300 audit(1765890406.294:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff40d26a00 a2=3 a3=0 items=0 ppid=1 pid=6218 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:06:46.335476 kernel: audit: type=1327 audit(1765890406.294:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:06:46.294000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:06:46.335607 update_engine[2510]: I20251216 13:06:46.335204 2510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 13:06:46.335607 update_engine[2510]: I20251216 13:06:46.335318 2510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 13:06:46.336196 update_engine[2510]: I20251216 13:06:46.336126 2510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 13:06:46.336378 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 13:06:46.350536 kernel: audit: type=1105 audit(1765890406.340:784): pid=6218 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.340000 audit[6218]: USER_START pid=6218 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.349000 audit[6227]: CRED_ACQ pid=6227 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.358684 kernel: audit: type=1103 audit(1765890406.349:785): pid=6227 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.368113 update_engine[2510]: E20251216 13:06:46.368077 2510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Dec 16 13:06:46.368207 update_engine[2510]: I20251216 13:06:46.368173 2510 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 16 13:06:46.662346 sshd[6227]: Connection closed by 10.200.16.10 port 33432
Dec 16 13:06:46.663379 sshd-session[6218]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:46.664000 audit[6218]: USER_END pid=6218 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.672214 systemd[1]: sshd@8-10.200.4.32:22-10.200.16.10:33432.service: Deactivated successfully.
Dec 16 13:06:46.664000 audit[6218]: CRED_DISP pid=6218 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.676746 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 13:06:46.679377 systemd-logind[2509]: Session 11 logged out. Waiting for processes to exit.
Dec 16 13:06:46.680825 kernel: audit: type=1106 audit(1765890406.664:786): pid=6218 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.680893 kernel: audit: type=1104 audit(1765890406.664:787): pid=6218 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:46.680427 systemd-logind[2509]: Removed session 11.
Dec 16 13:06:46.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.32:22-10.200.16.10:33432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:50.034754 kubelet[3970]: E1216 13:06:50.034669 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82"
Dec 16 13:06:51.039204 kubelet[3970]: E1216 13:06:51.039112 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff"
Dec 16 13:06:51.781764 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:06:51.781937 kernel: audit: type=1130 audit(1765890411.777:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.32:22-10.200.16.10:37980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:51.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.32:22-10.200.16.10:37980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:51.778554 systemd[1]: Started sshd@9-10.200.4.32:22-10.200.16.10:37980.service - OpenSSH per-connection server daemon (10.200.16.10:37980).
Dec 16 13:06:52.037555 kubelet[3970]: E1216 13:06:52.037142 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817"
Dec 16 13:06:52.037555 kubelet[3970]: E1216 13:06:52.037091 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c"
Dec 16 13:06:52.295838 sshd[6259]: Accepted publickey for core from 10.200.16.10 port 37980 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:06:52.306324 kernel: audit: type=1101 audit(1765890412.294:790): pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.294000 audit[6259]: USER_ACCT pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.298002 sshd-session[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:06:52.296000 audit[6259]: CRED_ACQ pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.320277 kernel: audit: type=1103 audit(1765890412.296:791): pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.320348 kernel: audit: type=1006 audit(1765890412.296:792): pid=6259 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Dec 16 13:06:52.324035 systemd-logind[2509]: New session 12 of user core.
Dec 16 13:06:52.296000 audit[6259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe23b7cb90 a2=3 a3=0 items=0 ppid=1 pid=6259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:06:52.335608 kernel: audit: type=1300 audit(1765890412.296:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe23b7cb90 a2=3 a3=0 items=0 ppid=1 pid=6259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:06:52.335675 kernel: audit: type=1327 audit(1765890412.296:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:06:52.296000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:06:52.335982 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 13:06:52.337000 audit[6259]: USER_START pid=6259 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.349408 kernel: audit: type=1105 audit(1765890412.337:793): pid=6259 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.348000 audit[6262]: CRED_ACQ pid=6262 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.357272 kernel: audit: type=1103 audit(1765890412.348:794): pid=6262 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.659000 sshd[6262]: Connection closed by 10.200.16.10 port 37980
Dec 16 13:06:52.662364 sshd-session[6259]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:52.666000 audit[6259]: USER_END pid=6259 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.677219 kernel: audit: type=1106 audit(1765890412.666:795): pid=6259 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.678432 systemd[1]: sshd@9-10.200.4.32:22-10.200.16.10:37980.service: Deactivated successfully.
Dec 16 13:06:52.684123 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:06:52.674000 audit[6259]: CRED_DISP pid=6259 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.694545 kernel: audit: type=1104 audit(1765890412.674:796): pid=6259 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:52.695733 systemd-logind[2509]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:06:52.698449 systemd-logind[2509]: Removed session 12.
Dec 16 13:06:52.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.32:22-10.200.16.10:37980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:52.766768 systemd[1]: Started sshd@10-10.200.4.32:22-10.200.16.10:37994.service - OpenSSH per-connection server daemon (10.200.16.10:37994).
Dec 16 13:06:52.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.32:22-10.200.16.10:37994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:06:53.301000 audit[6275]: USER_ACCT pid=6275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:53.303421 sshd[6275]: Accepted publickey for core from 10.200.16.10 port 37994 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:06:53.304000 audit[6275]: CRED_ACQ pid=6275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:53.304000 audit[6275]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc50b23aa0 a2=3 a3=0 items=0 ppid=1 pid=6275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:06:53.304000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:06:53.304757 sshd-session[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:06:53.310467 systemd-logind[2509]: New session 13 of user core.
Dec 16 13:06:53.315360 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:06:53.316000 audit[6275]: USER_START pid=6275 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:53.318000 audit[6280]: CRED_ACQ pid=6280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:53.669348 sshd[6280]: Connection closed by 10.200.16.10 port 37994
Dec 16 13:06:53.671182 sshd-session[6275]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:53.672000 audit[6275]: USER_END pid=6275 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:53.672000 audit[6275]: CRED_DISP pid=6275 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:06:53.677332 systemd-logind[2509]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:06:53.677767 systemd[1]: sshd@10-10.200.4.32:22-10.200.16.10:37994.service: Deactivated successfully.
Dec 16 13:06:53.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.32:22-10.200.16.10:37994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 16 13:06:53.681671 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:06:53.685567 systemd-logind[2509]: Removed session 13. Dec 16 13:06:53.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.32:22-10.200.16.10:37996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:53.779471 systemd[1]: Started sshd@11-10.200.4.32:22-10.200.16.10:37996.service - OpenSSH per-connection server daemon (10.200.16.10:37996). Dec 16 13:06:54.037226 kubelet[3970]: E1216 13:06:54.036973 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac" Dec 16 13:06:54.299000 audit[6290]: USER_ACCT pid=6290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:54.300444 sshd[6290]: Accepted publickey for core from 10.200.16.10 port 37996 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 
13:06:54.301793 sshd-session[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:54.301000 audit[6290]: CRED_ACQ pid=6290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:54.301000 audit[6290]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe4a4ebb0 a2=3 a3=0 items=0 ppid=1 pid=6290 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:54.301000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:06:54.311291 systemd-logind[2509]: New session 14 of user core. Dec 16 13:06:54.316633 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:06:54.320000 audit[6290]: USER_START pid=6290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:54.323000 audit[6293]: CRED_ACQ pid=6293 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:54.667314 sshd[6293]: Connection closed by 10.200.16.10 port 37996 Dec 16 13:06:54.667857 sshd-session[6290]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:54.670000 audit[6290]: USER_END pid=6290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:54.670000 audit[6290]: CRED_DISP pid=6290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:54.673431 systemd-logind[2509]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:06:54.674986 systemd[1]: sshd@11-10.200.4.32:22-10.200.16.10:37996.service: Deactivated successfully. Dec 16 13:06:54.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.32:22-10.200.16.10:37996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:54.679959 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:06:54.682974 systemd-logind[2509]: Removed session 14. 
Dec 16 13:06:55.038211 kubelet[3970]: E1216 13:06:55.035514 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 16 13:06:56.337050 update_engine[2510]: I20251216 13:06:56.336673 2510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:06:56.337050 update_engine[2510]: I20251216 13:06:56.336825 2510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:06:56.337985 update_engine[2510]: I20251216 13:06:56.337958 2510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:06:56.380447 update_engine[2510]: E20251216 13:06:56.380388 2510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:06:56.380584 update_engine[2510]: I20251216 13:06:56.380486 2510 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 16 13:06:59.784014 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 13:06:59.784445 kernel: audit: type=1130 audit(1765890419.774:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.32:22-10.200.16.10:38006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:59.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.32:22-10.200.16.10:38006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:06:59.775506 systemd[1]: Started sshd@12-10.200.4.32:22-10.200.16.10:38006.service - OpenSSH per-connection server daemon (10.200.16.10:38006). Dec 16 13:07:00.301600 kernel: audit: type=1101 audit(1765890420.292:817): pid=6311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.292000 audit[6311]: USER_ACCT pid=6311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.301880 sshd[6311]: Accepted publickey for core from 10.200.16.10 port 38006 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:00.303128 sshd-session[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:00.301000 audit[6311]: CRED_ACQ pid=6311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.311309 kernel: audit: type=1103 audit(1765890420.301:818): pid=6311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.325701 kernel: audit: type=1006 audit(1765890420.301:819): pid=6311 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 13:07:00.325791 kernel: audit: type=1300 audit(1765890420.301:819): 
arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2cb7d000 a2=3 a3=0 items=0 ppid=1 pid=6311 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:00.301000 audit[6311]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2cb7d000 a2=3 a3=0 items=0 ppid=1 pid=6311 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:00.319445 systemd-logind[2509]: New session 15 of user core. Dec 16 13:07:00.329351 kernel: audit: type=1327 audit(1765890420.301:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:00.301000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:00.333371 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 13:07:00.335000 audit[6311]: USER_START pid=6311 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.345186 kernel: audit: type=1105 audit(1765890420.335:820): pid=6311 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.338000 audit[6314]: CRED_ACQ pid=6314 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.352251 kernel: audit: type=1103 audit(1765890420.338:821): pid=6314 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.686827 sshd[6314]: Connection closed by 10.200.16.10 port 38006 Dec 16 13:07:00.687178 sshd-session[6311]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:00.689000 audit[6311]: USER_END pid=6311 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.694014 systemd[1]: sshd@12-10.200.4.32:22-10.200.16.10:38006.service: Deactivated successfully. 
Dec 16 13:07:00.696865 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:07:00.689000 audit[6311]: CRED_DISP pid=6311 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.701941 systemd-logind[2509]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:07:00.703917 systemd-logind[2509]: Removed session 15. Dec 16 13:07:00.708183 kernel: audit: type=1106 audit(1765890420.689:822): pid=6311 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.708261 kernel: audit: type=1104 audit(1765890420.689:823): pid=6311 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:00.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.32:22-10.200.16.10:38006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:03.036706 kubelet[3970]: E1216 13:07:03.035587 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:07:03.036706 kubelet[3970]: E1216 13:07:03.035659 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:07:04.034271 kubelet[3970]: E1216 13:07:04.034129 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:07:05.800617 systemd[1]: Started sshd@13-10.200.4.32:22-10.200.16.10:39882.service - OpenSSH per-connection server daemon (10.200.16.10:39882). 
Dec 16 13:07:05.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.32:22-10.200.16.10:39882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:05.804177 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:05.804272 kernel: audit: type=1130 audit(1765890425.799:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.32:22-10.200.16.10:39882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:06.034995 containerd[2529]: time="2025-12-16T13:07:06.034843436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:07:06.036472 kubelet[3970]: E1216 13:07:06.035072 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:07:06.310318 containerd[2529]: time="2025-12-16T13:07:06.310261996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:06.313527 containerd[2529]: time="2025-12-16T13:07:06.313497967Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:07:06.313630 containerd[2529]: time="2025-12-16T13:07:06.313605914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:06.313835 kubelet[3970]: E1216 13:07:06.313784 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:07:06.313919 kubelet[3970]: E1216 13:07:06.313854 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:07:06.314304 kubelet[3970]: E1216 13:07:06.314266 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed7d3f55cc7f4131bcc3c705ab88d498,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-848zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b7d557678-kmhrd_calico-system(d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:06.317087 containerd[2529]: time="2025-12-16T13:07:06.317050715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:07:06.338000 audit[6326]: USER_ACCT pid=6326 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.339541 update_engine[2510]: I20251216 13:07:06.338841 2510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:07:06.339541 update_engine[2510]: I20251216 13:07:06.338951 2510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:07:06.339541 update_engine[2510]: I20251216 13:07:06.339362 2510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:07:06.340016 sshd[6326]: Accepted publickey for core from 10.200.16.10 port 39882 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:06.342008 sshd-session[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:06.347188 kernel: audit: type=1101 audit(1765890426.338:826): pid=6326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.340000 audit[6326]: CRED_ACQ pid=6326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.349471 systemd-logind[2509]: New session 16 of user core. 
Dec 16 13:07:06.354300 update_engine[2510]: E20251216 13:07:06.354017 2510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354461 2510 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354482 2510 omaha_request_action.cc:617] Omaha request response: Dec 16 13:07:06.355241 update_engine[2510]: E20251216 13:07:06.354566 2510 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354586 2510 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354590 2510 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354594 2510 update_attempter.cc:306] Processing Done. Dec 16 13:07:06.355241 update_engine[2510]: E20251216 13:07:06.354614 2510 update_attempter.cc:619] Update failed. Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354619 2510 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354624 2510 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354630 2510 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354697 2510 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354727 2510 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:07:06.355241 update_engine[2510]: I20251216 13:07:06.354733 2510 omaha_request_action.cc:272] Request: Dec 16 13:07:06.355241 update_engine[2510]: Dec 16 13:07:06.355241 update_engine[2510]: Dec 16 13:07:06.355241 update_engine[2510]: Dec 16 13:07:06.355241 update_engine[2510]: Dec 16 13:07:06.355241 update_engine[2510]: Dec 16 13:07:06.355241 update_engine[2510]: Dec 16 13:07:06.355628 update_engine[2510]: I20251216 13:07:06.354738 2510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:07:06.355628 update_engine[2510]: I20251216 13:07:06.354758 2510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:07:06.356898 update_engine[2510]: I20251216 13:07:06.356874 2510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:07:06.357255 locksmithd[2602]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 16 13:07:06.357732 kernel: audit: type=1103 audit(1765890426.340:827): pid=6326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.357831 kernel: audit: type=1006 audit(1765890426.340:828): pid=6326 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 13:07:06.358523 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 13:07:06.340000 audit[6326]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffeb91b730 a2=3 a3=0 items=0 ppid=1 pid=6326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:06.364729 kernel: audit: type=1300 audit(1765890426.340:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffeb91b730 a2=3 a3=0 items=0 ppid=1 pid=6326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:06.340000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:06.367983 kernel: audit: type=1327 audit(1765890426.340:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:06.361000 audit[6326]: USER_START pid=6326 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.372327 kernel: audit: type=1105 audit(1765890426.361:829): pid=6326 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.363000 audit[6329]: CRED_ACQ pid=6329 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.374409 update_engine[2510]: E20251216 
13:07:06.374203 2510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:07:06.374409 update_engine[2510]: I20251216 13:07:06.374279 2510 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 13:07:06.374409 update_engine[2510]: I20251216 13:07:06.374286 2510 omaha_request_action.cc:617] Omaha request response: Dec 16 13:07:06.374409 update_engine[2510]: I20251216 13:07:06.374294 2510 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:07:06.374409 update_engine[2510]: I20251216 13:07:06.374300 2510 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:07:06.374409 update_engine[2510]: I20251216 13:07:06.374305 2510 update_attempter.cc:306] Processing Done. Dec 16 13:07:06.374409 update_engine[2510]: I20251216 13:07:06.374315 2510 update_attempter.cc:310] Error event sent. 
Dec 16 13:07:06.374409 update_engine[2510]: I20251216 13:07:06.374325 2510 update_check_scheduler.cc:74] Next update check in 45m48s Dec 16 13:07:06.374937 locksmithd[2602]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 16 13:07:06.381233 kernel: audit: type=1103 audit(1765890426.363:830): pid=6329 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.588275 containerd[2529]: time="2025-12-16T13:07:06.585872913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:06.589971 containerd[2529]: time="2025-12-16T13:07:06.589926508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:07:06.590088 containerd[2529]: time="2025-12-16T13:07:06.590052091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:06.591443 kubelet[3970]: E1216 13:07:06.591390 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:07:06.591542 kubelet[3970]: E1216 13:07:06.591466 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:07:06.591666 kubelet[3970]: E1216 13:07:06.591623 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-848zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-b7d557678-kmhrd_calico-system(d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:06.593267 kubelet[3970]: E1216 13:07:06.593220 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac" Dec 16 13:07:06.720602 sshd[6329]: Connection closed by 10.200.16.10 port 39882 Dec 16 13:07:06.723371 sshd-session[6326]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:06.724000 audit[6326]: USER_END pid=6326 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.728538 systemd-logind[2509]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:07:06.729387 systemd[1]: sshd@13-10.200.4.32:22-10.200.16.10:39882.service: Deactivated successfully. Dec 16 13:07:06.732879 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 16 13:07:06.735174 kernel: audit: type=1106 audit(1765890426.724:831): pid=6326 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.736138 systemd-logind[2509]: Removed session 16. Dec 16 13:07:06.724000 audit[6326]: CRED_DISP pid=6326 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.743250 kernel: audit: type=1104 audit(1765890426.724:832): pid=6326 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:06.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.32:22-10.200.16.10:39882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:10.034870 containerd[2529]: time="2025-12-16T13:07:10.034381481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:07:10.301336 containerd[2529]: time="2025-12-16T13:07:10.300923761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:10.304131 containerd[2529]: time="2025-12-16T13:07:10.303923371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:07:10.304131 containerd[2529]: time="2025-12-16T13:07:10.303962310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:10.304682 kubelet[3970]: E1216 13:07:10.304632 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:07:10.305921 kubelet[3970]: E1216 13:07:10.305216 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:07:10.305921 kubelet[3970]: E1216 13:07:10.305468 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6fg6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k2n4d_calico-system(7ece9954-95e0-4c99-bc7f-1fc28a21ac1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:10.307478 kubelet[3970]: E1216 13:07:10.307424 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 16 13:07:11.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.32:22-10.200.16.10:52870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:11.834468 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:11.834790 kernel: audit: type=1130 audit(1765890431.828:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.32:22-10.200.16.10:52870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:11.828947 systemd[1]: Started sshd@14-10.200.4.32:22-10.200.16.10:52870.service - OpenSSH per-connection server daemon (10.200.16.10:52870). Dec 16 13:07:12.353000 audit[6347]: USER_ACCT pid=6347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.361180 kernel: audit: type=1101 audit(1765890432.353:835): pid=6347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.361299 sshd[6347]: Accepted publickey for core from 10.200.16.10 port 52870 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:12.362004 sshd-session[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:12.360000 audit[6347]: CRED_ACQ pid=6347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.369446 kernel: audit: type=1103 audit(1765890432.360:836): pid=6347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.369515 kernel: audit: type=1006 audit(1765890432.360:837): pid=6347 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 13:07:12.360000 audit[6347]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3247b5e0 a2=3 a3=0 items=0 ppid=1 pid=6347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:12.373252 kernel: audit: type=1300 audit(1765890432.360:837): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3247b5e0 a2=3 a3=0 items=0 ppid=1 pid=6347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:12.360000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:12.375809 kernel: audit: type=1327 audit(1765890432.360:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:12.376615 systemd-logind[2509]: New session 17 of user core. Dec 16 13:07:12.387336 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 13:07:12.388000 audit[6347]: USER_START pid=6347 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.395241 kernel: audit: type=1105 audit(1765890432.388:838): pid=6347 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.395269 kernel: audit: type=1103 audit(1765890432.393:839): pid=6350 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.393000 audit[6350]: CRED_ACQ pid=6350 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.704312 sshd[6350]: Connection closed by 10.200.16.10 port 52870 Dec 16 13:07:12.704456 sshd-session[6347]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:12.706000 audit[6347]: USER_END pid=6347 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.711406 systemd[1]: sshd@14-10.200.4.32:22-10.200.16.10:52870.service: Deactivated successfully. 
Dec 16 13:07:12.714233 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:07:12.716203 kernel: audit: type=1106 audit(1765890432.706:840): pid=6347 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.706000 audit[6347]: CRED_DISP pid=6347 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.716514 systemd-logind[2509]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:07:12.717729 systemd-logind[2509]: Removed session 17. Dec 16 13:07:12.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.32:22-10.200.16.10:52870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:12.723184 kernel: audit: type=1104 audit(1765890432.706:841): pid=6347 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:12.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.32:22-10.200.16.10:52880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:12.816439 systemd[1]: Started sshd@15-10.200.4.32:22-10.200.16.10:52880.service - OpenSSH per-connection server daemon (10.200.16.10:52880). 
Dec 16 13:07:13.332000 audit[6361]: USER_ACCT pid=6361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.333648 sshd[6361]: Accepted publickey for core from 10.200.16.10 port 52880 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:13.333000 audit[6361]: CRED_ACQ pid=6361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.333000 audit[6361]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1a2f8200 a2=3 a3=0 items=0 ppid=1 pid=6361 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:13.333000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:13.335333 sshd-session[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:13.345526 systemd-logind[2509]: New session 18 of user core. Dec 16 13:07:13.353305 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 13:07:13.356000 audit[6361]: USER_START pid=6361 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.359000 audit[6364]: CRED_ACQ pid=6364 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.819418 sshd[6364]: Connection closed by 10.200.16.10 port 52880 Dec 16 13:07:13.820137 sshd-session[6361]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:13.820000 audit[6361]: USER_END pid=6361 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.820000 audit[6361]: CRED_DISP pid=6361 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:13.824551 systemd[1]: sshd@15-10.200.4.32:22-10.200.16.10:52880.service: Deactivated successfully. Dec 16 13:07:13.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.32:22-10.200.16.10:52880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:13.826807 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:07:13.828846 systemd-logind[2509]: Session 18 logged out. 
Waiting for processes to exit. Dec 16 13:07:13.829959 systemd-logind[2509]: Removed session 18. Dec 16 13:07:13.926556 systemd[1]: Started sshd@16-10.200.4.32:22-10.200.16.10:52886.service - OpenSSH per-connection server daemon (10.200.16.10:52886). Dec 16 13:07:13.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.32:22-10.200.16.10:52886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:14.437000 audit[6374]: USER_ACCT pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:14.437944 sshd[6374]: Accepted publickey for core from 10.200.16.10 port 52886 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:14.437000 audit[6374]: CRED_ACQ pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:14.437000 audit[6374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4802a070 a2=3 a3=0 items=0 ppid=1 pid=6374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:14.437000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:14.439412 sshd-session[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:14.446225 systemd-logind[2509]: New session 19 of user core. Dec 16 13:07:14.452397 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 13:07:14.453000 audit[6374]: USER_START pid=6374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:14.455000 audit[6377]: CRED_ACQ pid=6377 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.469000 audit[6388]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=6388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:15.469000 audit[6388]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdbb827750 a2=0 a3=7ffdbb82773c items=0 ppid=4122 pid=6388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:15.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:15.474000 audit[6388]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=6388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:15.474000 audit[6388]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdbb827750 a2=0 a3=0 items=0 ppid=4122 pid=6388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:15.474000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:15.586886 sshd[6377]: Connection closed by 10.200.16.10 port 52886 Dec 16 13:07:15.590127 sshd-session[6374]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:15.590000 audit[6374]: USER_END pid=6374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.590000 audit[6374]: CRED_DISP pid=6374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.596733 systemd[1]: sshd@16-10.200.4.32:22-10.200.16.10:52886.service: Deactivated successfully. Dec 16 13:07:15.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.32:22-10.200.16.10:52886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:15.599667 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:07:15.604585 systemd-logind[2509]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:07:15.605965 systemd-logind[2509]: Removed session 19. 
Dec 16 13:07:15.619000 audit[6393]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=6393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:15.619000 audit[6393]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffecc12ac70 a2=0 a3=7ffecc12ac5c items=0 ppid=4122 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:15.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:15.625000 audit[6393]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=6393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:15.625000 audit[6393]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffecc12ac70 a2=0 a3=0 items=0 ppid=4122 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:15.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:15.695759 systemd[1]: Started sshd@17-10.200.4.32:22-10.200.16.10:52894.service - OpenSSH per-connection server daemon (10.200.16.10:52894). Dec 16 13:07:15.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.32:22-10.200.16.10:52894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:16.034131 kubelet[3970]: E1216 13:07:16.034088 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:07:16.213000 audit[6395]: USER_ACCT pid=6395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.213775 sshd[6395]: Accepted publickey for core from 10.200.16.10 port 52894 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:16.214000 audit[6395]: CRED_ACQ pid=6395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.214000 audit[6395]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd529e76a0 a2=3 a3=0 items=0 ppid=1 pid=6395 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:16.214000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:16.215489 sshd-session[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:16.224950 systemd-logind[2509]: New session 20 of user core. 
Dec 16 13:07:16.231379 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:07:16.235000 audit[6395]: USER_START pid=6395 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.238000 audit[6398]: CRED_ACQ pid=6398 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.819495 sshd[6398]: Connection closed by 10.200.16.10 port 52894 Dec 16 13:07:16.821486 sshd-session[6395]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:16.825000 audit[6395]: USER_END pid=6395 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.825000 audit[6395]: CRED_DISP pid=6395 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.830878 systemd[1]: sshd@17-10.200.4.32:22-10.200.16.10:52894.service: Deactivated successfully. Dec 16 13:07:16.831439 systemd-logind[2509]: Session 20 logged out. Waiting for processes to exit. 
Dec 16 13:07:16.838579 kernel: kauditd_printk_skb: 45 callbacks suppressed Dec 16 13:07:16.838671 kernel: audit: type=1131 audit(1765890436.831:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.32:22-10.200.16.10:52894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:16.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.32:22-10.200.16.10:52894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:16.837480 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:07:16.841117 systemd-logind[2509]: Removed session 20. Dec 16 13:07:16.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.32:22-10.200.16.10:52896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:16.926438 systemd[1]: Started sshd@18-10.200.4.32:22-10.200.16.10:52896.service - OpenSSH per-connection server daemon (10.200.16.10:52896). Dec 16 13:07:16.933172 kernel: audit: type=1130 audit(1765890436.926:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.32:22-10.200.16.10:52896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:17.446313 kernel: audit: type=1101 audit(1765890437.440:875): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.440000 audit[6433]: USER_ACCT pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.445580 sshd-session[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:17.446945 sshd[6433]: Accepted publickey for core from 10.200.16.10 port 52896 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:17.452243 kernel: audit: type=1103 audit(1765890437.443:876): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.443000 audit[6433]: CRED_ACQ pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.458304 systemd-logind[2509]: New session 21 of user core. Dec 16 13:07:17.463268 kernel: audit: type=1006 audit(1765890437.443:877): pid=6433 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 16 13:07:17.462748 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 13:07:17.469403 kernel: audit: type=1300 audit(1765890437.443:877): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe52f98860 a2=3 a3=0 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:17.443000 audit[6433]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe52f98860 a2=3 a3=0 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:17.443000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:17.477068 kernel: audit: type=1327 audit(1765890437.443:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:17.477376 kernel: audit: type=1105 audit(1765890437.469:878): pid=6433 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.469000 audit[6433]: USER_START pid=6433 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.478000 audit[6436]: CRED_ACQ pid=6436 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.483203 kernel: audit: type=1103 
audit(1765890437.478:879): pid=6436 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.781411 sshd[6436]: Connection closed by 10.200.16.10 port 52896 Dec 16 13:07:17.783633 sshd-session[6433]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:17.791185 kernel: audit: type=1106 audit(1765890437.785:880): pid=6433 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.785000 audit[6433]: USER_END pid=6433 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.791956 systemd[1]: sshd@18-10.200.4.32:22-10.200.16.10:52896.service: Deactivated successfully. Dec 16 13:07:17.785000 audit[6433]: CRED_DISP pid=6433 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:17.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.32:22-10.200.16.10:52896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:17.796994 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:07:17.801187 systemd-logind[2509]: Session 21 logged out. 
Waiting for processes to exit. Dec 16 13:07:17.803259 systemd-logind[2509]: Removed session 21. Dec 16 13:07:18.036877 kubelet[3970]: E1216 13:07:18.036739 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:07:18.037614 kubelet[3970]: E1216 13:07:18.036809 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac" Dec 16 
13:07:18.037681 containerd[2529]: time="2025-12-16T13:07:18.036929926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:07:18.308245 containerd[2529]: time="2025-12-16T13:07:18.307126756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:18.311766 containerd[2529]: time="2025-12-16T13:07:18.311706354Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:07:18.312046 containerd[2529]: time="2025-12-16T13:07:18.311721511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:18.312221 kubelet[3970]: E1216 13:07:18.312183 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:18.312295 kubelet[3970]: E1216 13:07:18.312237 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:18.312716 kubelet[3970]: E1216 13:07:18.312658 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8c8cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56b946485d-w9zv5_calico-apiserver(5156a21d-5db3-420c-a907-ae3cb29e174c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:18.313885 kubelet[3970]: E1216 13:07:18.313839 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:07:19.037225 containerd[2529]: time="2025-12-16T13:07:19.037173824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:07:19.299292 containerd[2529]: time="2025-12-16T13:07:19.298679328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:19.302281 containerd[2529]: time="2025-12-16T13:07:19.302117445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:07:19.302281 containerd[2529]: time="2025-12-16T13:07:19.302135557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:19.302451 kubelet[3970]: E1216 13:07:19.302400 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:19.302723 kubelet[3970]: E1216 13:07:19.302462 3970 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:19.302723 kubelet[3970]: E1216 13:07:19.302659 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57vvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8bf7db64d-zpn94_calico-system(3ce56745-c6a8-40c1-81e7-b66ad27dd817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:19.304127 kubelet[3970]: E1216 13:07:19.304041 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:07:20.451000 audit[6465]: 
NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=6465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:20.451000 audit[6465]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce40de580 a2=0 a3=7ffce40de56c items=0 ppid=4122 pid=6465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:20.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:20.459000 audit[6465]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=6465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:20.459000 audit[6465]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffce40de580 a2=0 a3=7ffce40de56c items=0 ppid=4122 pid=6465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:20.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:22.890500 systemd[1]: Started sshd@19-10.200.4.32:22-10.200.16.10:49864.service - OpenSSH per-connection server daemon (10.200.16.10:49864). Dec 16 13:07:22.897356 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 16 13:07:22.897484 kernel: audit: type=1130 audit(1765890442.889:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.32:22-10.200.16.10:49864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:22.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.32:22-10.200.16.10:49864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:23.401000 audit[6474]: USER_ACCT pid=6474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.408040 sshd[6474]: Accepted publickey for core from 10.200.16.10 port 49864 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:23.407848 sshd-session[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:23.409179 kernel: audit: type=1101 audit(1765890443.401:886): pid=6474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.406000 audit[6474]: CRED_ACQ pid=6474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.414234 kernel: audit: type=1103 audit(1765890443.406:887): pid=6474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.419186 kernel: audit: type=1006 audit(1765890443.406:888): pid=6474 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 13:07:23.424239 systemd-logind[2509]: New session 22 of user core. Dec 16 13:07:23.406000 audit[6474]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff3fa160 a2=3 a3=0 items=0 ppid=1 pid=6474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:23.406000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:23.433196 kernel: audit: type=1300 audit(1765890443.406:888): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff3fa160 a2=3 a3=0 items=0 ppid=1 pid=6474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:23.433430 kernel: audit: type=1327 audit(1765890443.406:888): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:23.437398 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 13:07:23.439000 audit[6474]: USER_START pid=6474 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.446190 kernel: audit: type=1105 audit(1765890443.439:889): pid=6474 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.447000 audit[6477]: CRED_ACQ pid=6477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.454185 kernel: audit: type=1103 audit(1765890443.447:890): pid=6477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.750194 sshd[6477]: Connection closed by 10.200.16.10 port 49864 Dec 16 13:07:23.752108 sshd-session[6474]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:23.752000 audit[6474]: USER_END pid=6474 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.760187 kernel: audit: type=1106 audit(1765890443.752:891): pid=6474 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.763879 systemd[1]: sshd@19-10.200.4.32:22-10.200.16.10:49864.service: Deactivated successfully. Dec 16 13:07:23.752000 audit[6474]: CRED_DISP pid=6474 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.769218 kernel: audit: type=1104 audit(1765890443.752:892): pid=6474 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.32:22-10.200.16.10:49864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:23.770764 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:07:23.774960 systemd-logind[2509]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:07:23.775770 systemd-logind[2509]: Removed session 22. 
Dec 16 13:07:24.034958 kubelet[3970]: E1216 13:07:24.034120 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 16 13:07:28.856809 systemd[1]: Started sshd@20-10.200.4.32:22-10.200.16.10:49880.service - OpenSSH per-connection server daemon (10.200.16.10:49880). Dec 16 13:07:28.858346 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:28.858445 kernel: audit: type=1130 audit(1765890448.855:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.32:22-10.200.16.10:49880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:28.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.32:22-10.200.16.10:49880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:29.039547 containerd[2529]: time="2025-12-16T13:07:29.039436340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:07:29.310882 containerd[2529]: time="2025-12-16T13:07:29.310817828Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:29.314850 containerd[2529]: time="2025-12-16T13:07:29.314790973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:07:29.314952 containerd[2529]: time="2025-12-16T13:07:29.314930903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:29.315310 kubelet[3970]: E1216 13:07:29.315264 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:07:29.315675 kubelet[3970]: E1216 13:07:29.315336 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:07:29.315675 kubelet[3970]: E1216 13:07:29.315510 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2q4hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 13:07:29.319460 containerd[2529]: time="2025-12-16T13:07:29.319422656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:07:29.384000 audit[6490]: USER_ACCT pid=6490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.388185 sshd[6490]: Accepted publickey for core from 10.200.16.10 port 49880 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:29.389127 sshd-session[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:29.393247 kernel: audit: type=1101 audit(1765890449.384:895): pid=6490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.401330 kernel: audit: type=1103 audit(1765890449.387:896): pid=6490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.387000 audit[6490]: CRED_ACQ pid=6490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.406743 systemd-logind[2509]: New session 23 of user core. 
Dec 16 13:07:29.419872 kernel: audit: type=1006 audit(1765890449.387:897): pid=6490 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 13:07:29.419933 kernel: audit: type=1300 audit(1765890449.387:897): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdec4031a0 a2=3 a3=0 items=0 ppid=1 pid=6490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:29.387000 audit[6490]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdec4031a0 a2=3 a3=0 items=0 ppid=1 pid=6490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:29.418120 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:07:29.423323 kernel: audit: type=1327 audit(1765890449.387:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:29.387000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:29.424000 audit[6490]: USER_START pid=6490 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.440600 kernel: audit: type=1105 audit(1765890449.424:898): pid=6490 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.440676 kernel: audit: type=1103 
audit(1765890449.433:899): pid=6493 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.433000 audit[6493]: CRED_ACQ pid=6493 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.605342 containerd[2529]: time="2025-12-16T13:07:29.604369471Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:29.611549 containerd[2529]: time="2025-12-16T13:07:29.611311891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:07:29.611549 containerd[2529]: time="2025-12-16T13:07:29.611429706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:29.612360 kubelet[3970]: E1216 13:07:29.612297 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:07:29.612488 kubelet[3970]: E1216 13:07:29.612379 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:07:29.612913 kubelet[3970]: E1216 13:07:29.612855 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2q4hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-swgr5_calico-system(38439d67-e506-407a-b65d-e7dd3b4f13ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:29.614278 kubelet[3970]: E1216 13:07:29.614147 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff" Dec 16 13:07:29.761473 sshd[6493]: Connection closed by 10.200.16.10 port 49880 Dec 16 13:07:29.763355 sshd-session[6490]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:29.763000 audit[6490]: USER_END pid=6490 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.769073 systemd[1]: sshd@20-10.200.4.32:22-10.200.16.10:49880.service: Deactivated successfully. 
Dec 16 13:07:29.776120 kernel: audit: type=1106 audit(1765890449.763:900): pid=6490 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.776210 kernel: audit: type=1104 audit(1765890449.763:901): pid=6490 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.763000 audit[6490]: CRED_DISP pid=6490 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.774544 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:07:29.777738 systemd-logind[2509]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:07:29.779782 systemd-logind[2509]: Removed session 23. Dec 16 13:07:29.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.32:22-10.200.16.10:49880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:30.035069 containerd[2529]: time="2025-12-16T13:07:30.034561759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:07:30.324374 containerd[2529]: time="2025-12-16T13:07:30.324196435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:30.327700 containerd[2529]: time="2025-12-16T13:07:30.327660495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:07:30.327803 containerd[2529]: time="2025-12-16T13:07:30.327660499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:30.327966 kubelet[3970]: E1216 13:07:30.327923 3970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:30.328387 kubelet[3970]: E1216 13:07:30.327978 3970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:30.328387 kubelet[3970]: E1216 13:07:30.328169 3970 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56b946485d-6kjlw_calico-apiserver(46b8f3f5-9271-4b12-86c1-faf1b8e7af82): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:30.329427 kubelet[3970]: E1216 13:07:30.329385 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:07:31.036145 kubelet[3970]: E1216 13:07:31.036057 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c" Dec 16 13:07:31.039519 kubelet[3970]: E1216 13:07:31.039473 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac" Dec 16 13:07:33.035314 kubelet[3970]: E1216 13:07:33.035014 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817" Dec 16 13:07:34.868134 systemd[1]: Started sshd@21-10.200.4.32:22-10.200.16.10:52274.service - OpenSSH per-connection server daemon (10.200.16.10:52274). Dec 16 13:07:34.874087 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:34.874218 kernel: audit: type=1130 audit(1765890454.867:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.32:22-10.200.16.10:52274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:34.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.32:22-10.200.16.10:52274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:35.392000 audit[6505]: USER_ACCT pid=6505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.402273 kernel: audit: type=1101 audit(1765890455.392:904): pid=6505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.397532 sshd-session[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:35.402675 sshd[6505]: Accepted publickey for core from 10.200.16.10 port 52274 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:35.392000 audit[6505]: CRED_ACQ pid=6505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.412223 kernel: audit: type=1103 audit(1765890455.392:905): pid=6505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.416873 kernel: audit: type=1006 audit(1765890455.392:906): pid=6505 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 13:07:35.416947 kernel: audit: type=1300 audit(1765890455.392:906): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff50dcd410 a2=3 a3=0 items=0 ppid=1 pid=6505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:35.392000 audit[6505]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff50dcd410 a2=3 a3=0 items=0 ppid=1 pid=6505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:35.416646 systemd-logind[2509]: New session 24 of user core. Dec 16 13:07:35.392000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:35.425168 kernel: audit: type=1327 audit(1765890455.392:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:35.428421 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 13:07:35.430000 audit[6505]: USER_START pid=6505 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.440617 kernel: audit: type=1105 audit(1765890455.430:907): pid=6505 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.440679 kernel: audit: type=1103 audit(1765890455.439:908): pid=6508 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.439000 audit[6508]: CRED_ACQ pid=6508 uid=0 auid=500 ses=24 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.793545 sshd[6508]: Connection closed by 10.200.16.10 port 52274 Dec 16 13:07:35.795976 sshd-session[6505]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:35.796000 audit[6505]: USER_END pid=6505 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.800861 systemd[1]: sshd@21-10.200.4.32:22-10.200.16.10:52274.service: Deactivated successfully. Dec 16 13:07:35.804084 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:07:35.806181 kernel: audit: type=1106 audit(1765890455.796:909): pid=6505 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.806814 systemd-logind[2509]: Session 24 logged out. Waiting for processes to exit. 
Dec 16 13:07:35.796000 audit[6505]: CRED_DISP pid=6505 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.813241 kernel: audit: type=1104 audit(1765890455.796:910): pid=6505 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:35.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.32:22-10.200.16.10:52274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:35.813596 systemd-logind[2509]: Removed session 24. Dec 16 13:07:38.034659 kubelet[3970]: E1216 13:07:38.034559 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2n4d" podUID="7ece9954-95e0-4c99-bc7f-1fc28a21ac1f" Dec 16 13:07:40.901189 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:40.901346 kernel: audit: type=1130 audit(1765890460.899:912): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.32:22-10.200.16.10:57996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:40.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.32:22-10.200.16.10:57996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:40.899516 systemd[1]: Started sshd@22-10.200.4.32:22-10.200.16.10:57996.service - OpenSSH per-connection server daemon (10.200.16.10:57996). Dec 16 13:07:41.035174 kubelet[3970]: E1216 13:07:41.034645 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-6kjlw" podUID="46b8f3f5-9271-4b12-86c1-faf1b8e7af82" Dec 16 13:07:41.413397 sshd[6520]: Accepted publickey for core from 10.200.16.10 port 57996 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:41.413000 audit[6520]: USER_ACCT pid=6520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:41.416589 sshd-session[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:41.421186 kernel: audit: type=1101 audit(1765890461.413:913): pid=6520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:41.413000 audit[6520]: CRED_ACQ 
pid=6520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:41.431167 kernel: audit: type=1103 audit(1765890461.413:914): pid=6520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:41.438222 systemd-logind[2509]: New session 25 of user core. Dec 16 13:07:41.440178 kernel: audit: type=1006 audit(1765890461.413:915): pid=6520 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 13:07:41.413000 audit[6520]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4e60a2d0 a2=3 a3=0 items=0 ppid=1 pid=6520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:41.448166 kernel: audit: type=1300 audit(1765890461.413:915): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4e60a2d0 a2=3 a3=0 items=0 ppid=1 pid=6520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:41.413000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:41.450043 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 13:07:41.453717 kernel: audit: type=1327 audit(1765890461.413:915): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:41.455000 audit[6520]: USER_START pid=6520 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:41.463179 kernel: audit: type=1105 audit(1765890461.455:916): pid=6520 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:41.463000 audit[6523]: CRED_ACQ pid=6523 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:41.473174 kernel: audit: type=1103 audit(1765890461.463:917): pid=6523 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:41.756085 sshd[6523]: Connection closed by 10.200.16.10 port 57996
Dec 16 13:07:41.757091 sshd-session[6520]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:41.759000 audit[6520]: USER_END pid=6520 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:41.765571 systemd-logind[2509]: Session 25 logged out. Waiting for processes to exit.
Dec 16 13:07:41.766956 systemd[1]: sshd@22-10.200.4.32:22-10.200.16.10:57996.service: Deactivated successfully.
Dec 16 13:07:41.768282 kernel: audit: type=1106 audit(1765890461.759:918): pid=6520 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:41.759000 audit[6520]: CRED_DISP pid=6520 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:41.772990 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 13:07:41.778580 systemd-logind[2509]: Removed session 25.
Dec 16 13:07:41.779165 kernel: audit: type=1104 audit(1765890461.759:919): pid=6520 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:41.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.32:22-10.200.16.10:57996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:42.035519 kubelet[3970]: E1216 13:07:42.035148 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56b946485d-w9zv5" podUID="5156a21d-5db3-420c-a907-ae3cb29e174c"
Dec 16 13:07:42.037619 kubelet[3970]: E1216 13:07:42.036671 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7d557678-kmhrd" podUID="d76f74ff-aa1c-40a0-a82a-3e2f5f1611ac"
Dec 16 13:07:42.037619 kubelet[3970]: E1216 13:07:42.037584 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-swgr5" podUID="38439d67-e506-407a-b65d-e7dd3b4f13ff"
Dec 16 13:07:44.034903 kubelet[3970]: E1216 13:07:44.034479 3970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8bf7db64d-zpn94" podUID="3ce56745-c6a8-40c1-81e7-b66ad27dd817"
Dec 16 13:07:46.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.32:22-10.200.16.10:57998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:46.866769 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:07:46.866862 kernel: audit: type=1130 audit(1765890466.864:921): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.32:22-10.200.16.10:57998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:07:46.865103 systemd[1]: Started sshd@23-10.200.4.32:22-10.200.16.10:57998.service - OpenSSH per-connection server daemon (10.200.16.10:57998).
Dec 16 13:07:47.393000 audit[6560]: USER_ACCT pid=6560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.394741 sshd[6560]: Accepted publickey for core from 10.200.16.10 port 57998 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI
Dec 16 13:07:47.396810 sshd-session[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:47.403180 kernel: audit: type=1101 audit(1765890467.393:922): pid=6560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.409838 systemd-logind[2509]: New session 26 of user core.
Dec 16 13:07:47.395000 audit[6560]: CRED_ACQ pid=6560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.419187 kernel: audit: type=1103 audit(1765890467.395:923): pid=6560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.422723 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:07:47.395000 audit[6560]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd64493740 a2=3 a3=0 items=0 ppid=1 pid=6560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:47.433904 kernel: audit: type=1006 audit(1765890467.395:924): pid=6560 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Dec 16 13:07:47.433965 kernel: audit: type=1300 audit(1765890467.395:924): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd64493740 a2=3 a3=0 items=0 ppid=1 pid=6560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:07:47.395000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:47.437250 kernel: audit: type=1327 audit(1765890467.395:924): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:07:47.423000 audit[6560]: USER_START pid=6560 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.441510 kernel: audit: type=1105 audit(1765890467.423:925): pid=6560 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.428000 audit[6563]: CRED_ACQ pid=6563 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.448848 kernel: audit: type=1103 audit(1765890467.428:926): pid=6563 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.746025 sshd[6563]: Connection closed by 10.200.16.10 port 57998
Dec 16 13:07:47.745818 sshd-session[6560]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:47.747000 audit[6560]: USER_END pid=6560 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.760312 kernel: audit: type=1106 audit(1765890467.747:927): pid=6560 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.762939 systemd[1]: sshd@23-10.200.4.32:22-10.200.16.10:57998.service: Deactivated successfully.
Dec 16 13:07:47.773435 kernel: audit: type=1104 audit(1765890467.747:928): pid=6560 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.747000 audit[6560]: CRED_DISP pid=6560 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:07:47.768058 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:07:47.773065 systemd-logind[2509]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:07:47.775639 systemd-logind[2509]: Removed session 26.
Dec 16 13:07:47.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.32:22-10.200.16.10:57998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'