Dec 16 13:09:25.047848 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025 Dec 16 13:09:25.047871 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 13:09:25.047882 kernel: BIOS-provided physical RAM map: Dec 16 13:09:25.047888 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 16 13:09:25.047894 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 16 13:09:25.047901 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Dec 16 13:09:25.047909 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Dec 16 13:09:25.047915 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Dec 16 13:09:25.047922 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Dec 16 13:09:25.047929 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 16 13:09:25.047935 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 16 13:09:25.047941 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 16 13:09:25.047947 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 16 13:09:25.047954 kernel: printk: legacy bootconsole [earlyser0] enabled Dec 16 13:09:25.047961 kernel: NX (Execute Disable) protection: active Dec 16 13:09:25.047969 kernel: APIC: Static calls initialized Dec 16 13:09:25.047976 kernel: efi: EFI v2.7 by Microsoft Dec 16 13:09:25.047984 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa1018 RNG=0x3ffd2018 Dec 16 13:09:25.047991 kernel: random: crng init done Dec 16 13:09:25.047998 kernel: secureboot: Secure boot disabled Dec 16 13:09:25.048004 kernel: SMBIOS 3.1.0 present. 
Dec 16 13:09:25.048011 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Dec 16 13:09:25.048018 kernel: DMI: Memory slots populated: 2/2 Dec 16 13:09:25.048024 kernel: Hypervisor detected: Microsoft Hyper-V Dec 16 13:09:25.048030 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Dec 16 13:09:25.048038 kernel: Hyper-V: Nested features: 0x3e0101 Dec 16 13:09:25.048045 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 16 13:09:25.048052 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 16 13:09:25.048059 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 16 13:09:25.048066 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 16 13:09:25.048073 kernel: tsc: Detected 2299.998 MHz processor Dec 16 13:09:25.048079 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 13:09:25.048087 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 13:09:25.048093 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Dec 16 13:09:25.048102 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 16 13:09:25.048110 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 13:09:25.048117 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Dec 16 13:09:25.048125 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Dec 16 13:09:25.048132 kernel: Using GB pages for direct mapping Dec 16 13:09:25.048140 kernel: ACPI: Early table checksum verification disabled Dec 16 13:09:25.048151 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 16 13:09:25.048158 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:09:25.048165 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:09:25.048172 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 16 13:09:25.048179 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 16 13:09:25.048186 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:09:25.048195 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:09:25.048202 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:09:25.048210 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 16 13:09:25.048218 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 16 13:09:25.048226 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:09:25.048233 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 16 13:09:25.048241 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Dec 16 13:09:25.048248 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 16 13:09:25.048255 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 16 13:09:25.048263 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 16 13:09:25.048271 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 16 13:09:25.048278 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Dec 16 13:09:25.048286 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Dec 16 13:09:25.048296 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 16 13:09:25.048302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 16 13:09:25.048309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Dec 16 13:09:25.048316 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Dec 16 13:09:25.048323 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Dec 16 13:09:25.048331 kernel: Zone ranges: Dec 16 13:09:25.048339 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 13:09:25.048348 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 16 13:09:25.048356 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 16 13:09:25.048364 kernel: Device empty Dec 16 13:09:25.048370 kernel: Movable zone start for each node Dec 16 13:09:25.048378 kernel: Early memory node ranges Dec 16 13:09:25.048385 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 16 13:09:25.048392 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Dec 16 13:09:25.048400 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Dec 16 13:09:25.048408 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 16 13:09:25.048416 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 16 13:09:25.048424 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 16 13:09:25.048431 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:09:25.048438 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 16 13:09:25.048445 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Dec 16 13:09:25.048452 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Dec 16 13:09:25.048461 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 16 13:09:25.048469 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 16 13:09:25.048477 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 13:09:25.048485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 13:09:25.048492 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 13:09:25.048500 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 16 13:09:25.048507 kernel: TSC deadline timer available Dec 16 13:09:25.048515 kernel: CPU topo: Max. logical packages: 1 Dec 16 13:09:25.048522 kernel: CPU topo: Max. logical dies: 1 Dec 16 13:09:25.048529 kernel: CPU topo: Max. dies per package: 1 Dec 16 13:09:25.048536 kernel: CPU topo: Max. threads per core: 2 Dec 16 13:09:25.048544 kernel: CPU topo: Num. cores per package: 1 Dec 16 13:09:25.048552 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 13:09:25.048560 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 13:09:25.048568 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 16 13:09:25.048576 kernel: Booting paravirtualized kernel on Hyper-V Dec 16 13:09:25.048583 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 13:09:25.048590 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 13:09:25.048597 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 13:09:25.048605 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 13:09:25.048613 kernel: pcpu-alloc: [0] 0 1 Dec 16 13:09:25.048622 kernel: Hyper-V: PV spinlocks enabled Dec 16 13:09:25.048630 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 13:09:25.048638 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 13:09:25.048645 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 16 13:09:25.048652 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 13:09:25.048659 kernel: Fallback order for Node 0: 0 Dec 16 13:09:25.048668 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Dec 16 13:09:25.048676 kernel: Policy zone: Normal Dec 16 13:09:25.048684 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 13:09:25.048692 kernel: software IO TLB: area num 2. Dec 16 13:09:25.048729 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 13:09:25.048738 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 13:09:25.048745 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 13:09:25.048753 kernel: Dynamic Preempt: voluntary Dec 16 13:09:25.048763 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 13:09:25.048772 kernel: rcu: RCU event tracing is enabled. Dec 16 13:09:25.048784 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 13:09:25.048793 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 13:09:25.048801 kernel: Rude variant of Tasks RCU enabled. Dec 16 13:09:25.048809 kernel: Tracing variant of Tasks RCU enabled. Dec 16 13:09:25.048817 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 13:09:25.048826 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 13:09:25.048834 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:09:25.048843 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:09:25.048851 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:09:25.048859 kernel: Using NULL legacy PIC Dec 16 13:09:25.048866 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 16 13:09:25.048876 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 16 13:09:25.048884 kernel: Console: colour dummy device 80x25 Dec 16 13:09:25.048893 kernel: printk: legacy console [tty1] enabled Dec 16 13:09:25.048901 kernel: printk: legacy console [ttyS0] enabled Dec 16 13:09:25.048909 kernel: printk: legacy bootconsole [earlyser0] disabled Dec 16 13:09:25.048917 kernel: ACPI: Core revision 20240827 Dec 16 13:09:25.048924 kernel: Failed to register legacy timer interrupt Dec 16 13:09:25.048933 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 13:09:25.048942 kernel: x2apic enabled Dec 16 13:09:25.048950 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 13:09:25.048958 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Dec 16 13:09:25.048967 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 13:09:25.048974 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Dec 16 13:09:25.048982 kernel: Hyper-V: Using IPI hypercalls Dec 16 13:09:25.048991 kernel: APIC: send_IPI() replaced with hv_send_ipi() Dec 16 13:09:25.048998 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Dec 16 13:09:25.049007 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Dec 16 13:09:25.049015 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Dec 16 13:09:25.049024 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Dec 16 13:09:25.049032 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Dec 16 13:09:25.049040 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 16 13:09:25.049049 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299998) Dec 16 13:09:25.049056 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 13:09:25.049064 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 13:09:25.049072 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 13:09:25.049080 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 13:09:25.049088 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 13:09:25.049095 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 13:09:25.049103 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 16 13:09:25.049112 kernel: RETBleed: Vulnerable Dec 16 13:09:25.049119 kernel: Speculative Store Bypass: Vulnerable Dec 16 13:09:25.049126 kernel: active return thunk: its_return_thunk Dec 16 13:09:25.049134 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 13:09:25.049142 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 13:09:25.049149 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 13:09:25.049157 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 13:09:25.049165 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 16 13:09:25.049173 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 16 13:09:25.049180 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 16 13:09:25.049188 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Dec 16 13:09:25.049196 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Dec 16 13:09:25.049204 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Dec 16 13:09:25.049211 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 13:09:25.049219 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 16 13:09:25.049227 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 16 13:09:25.049234 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 16 13:09:25.049241 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Dec 16 13:09:25.049249 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Dec 16 13:09:25.049256 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Dec 16 13:09:25.049263 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Dec 16 13:09:25.049273 kernel: Freeing SMP alternatives memory: 32K Dec 16 13:09:25.049280 kernel: pid_max: default: 32768 minimum: 301 Dec 16 13:09:25.049288 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 13:09:25.049296 kernel: landlock: Up and running. Dec 16 13:09:25.049304 kernel: SELinux: Initializing. Dec 16 13:09:25.049311 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:09:25.049318 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:09:25.049326 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Dec 16 13:09:25.049334 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Dec 16 13:09:25.049342 kernel: signal: max sigframe size: 11952 Dec 16 13:09:25.049352 kernel: rcu: Hierarchical SRCU implementation. Dec 16 13:09:25.049361 kernel: rcu: Max phase no-delay instances is 400. Dec 16 13:09:25.049369 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 13:09:25.049376 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 13:09:25.049384 kernel: smp: Bringing up secondary CPUs ... Dec 16 13:09:25.049391 kernel: smpboot: x86: Booting SMP configuration: Dec 16 13:09:25.049399 kernel: .... 
node #0, CPUs: #1 Dec 16 13:09:25.049407 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 13:09:25.049417 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 16 13:09:25.049426 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 283604K reserved, 0K cma-reserved) Dec 16 13:09:25.049434 kernel: devtmpfs: initialized Dec 16 13:09:25.049442 kernel: x86/mm: Memory block size: 128MB Dec 16 13:09:25.049450 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 16 13:09:25.049457 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 13:09:25.049465 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 13:09:25.049475 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 13:09:25.049483 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 13:09:25.049492 kernel: audit: initializing netlink subsys (disabled) Dec 16 13:09:25.049500 kernel: audit: type=2000 audit(1765890557.070:1): state=initialized audit_enabled=0 res=1 Dec 16 13:09:25.049507 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 13:09:25.049515 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 13:09:25.049522 kernel: cpuidle: using governor menu Dec 16 13:09:25.049531 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 13:09:25.049540 kernel: dca service started, version 1.12.1 Dec 16 13:09:25.049549 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Dec 16 13:09:25.049557 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Dec 16 13:09:25.049565 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 13:09:25.049572 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 13:09:25.049580 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 13:09:25.049589 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 13:09:25.049597 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 13:09:25.049605 kernel: ACPI: Added _OSI(Module Device) Dec 16 13:09:25.049613 kernel: ACPI: Added _OSI(Processor Device) Dec 16 13:09:25.049622 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 13:09:25.049630 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 13:09:25.049638 kernel: ACPI: Interpreter enabled Dec 16 13:09:25.049647 kernel: ACPI: PM: (supports S0 S5) Dec 16 13:09:25.049655 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 13:09:25.049662 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 13:09:25.049670 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 16 13:09:25.049679 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 16 13:09:25.049687 kernel: iommu: Default domain type: Translated Dec 16 13:09:25.049695 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 13:09:25.049713 kernel: efivars: Registered efivars operations Dec 16 13:09:25.049721 kernel: PCI: Using ACPI for IRQ routing Dec 16 13:09:25.049728 kernel: PCI: System does not support PCI Dec 16 13:09:25.049736 kernel: vgaarb: loaded Dec 16 13:09:25.049745 kernel: clocksource: Switched to clocksource tsc-early Dec 16 13:09:25.049753 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 13:09:25.049761 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 13:09:25.049770 kernel: pnp: PnP ACPI init Dec 16 13:09:25.049778 kernel: pnp: PnP ACPI: found 3 devices Dec 16 13:09:25.049785 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 13:09:25.049793 kernel: NET: Registered PF_INET protocol family Dec 16 13:09:25.049801 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 13:09:25.049810 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 16 13:09:25.049818 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 13:09:25.049828 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 13:09:25.049836 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 16 13:09:25.049843 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 16 13:09:25.049851 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:09:25.049859 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:09:25.049867 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 13:09:25.049876 kernel: NET: Registered PF_XDP protocol family Dec 16 13:09:25.049885 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:09:25.049894 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 13:09:25.049901 kernel: software IO TLB: mapped [mem 0x000000003a9ba000-0x000000003e9ba000] (64MB) Dec 16 13:09:25.049909 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Dec 16 13:09:25.049917 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Dec 16 13:09:25.049924 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, 
max_idle_ns: 440795236380 ns Dec 16 13:09:25.049932 kernel: clocksource: Switched to clocksource tsc Dec 16 13:09:25.049942 kernel: Initialise system trusted keyrings Dec 16 13:09:25.049950 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 16 13:09:25.049959 kernel: Key type asymmetric registered Dec 16 13:09:25.049966 kernel: Asymmetric key parser 'x509' registered Dec 16 13:09:25.049974 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:09:25.049982 kernel: io scheduler mq-deadline registered Dec 16 13:09:25.049989 kernel: io scheduler kyber registered Dec 16 13:09:25.049999 kernel: io scheduler bfq registered Dec 16 13:09:25.050007 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:09:25.050016 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:09:25.050024 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:09:25.050033 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 16 13:09:25.050040 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:09:25.050048 kernel: i8042: PNP: No PS/2 controller found. Dec 16 13:09:25.050191 kernel: rtc_cmos 00:02: registered as rtc0 Dec 16 13:09:25.050284 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T13:09:19 UTC (1765890559) Dec 16 13:09:25.050370 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 16 13:09:25.050379 kernel: intel_pstate: Intel P-state driver initializing Dec 16 13:09:25.050387 kernel: efifb: probing for efifb Dec 16 13:09:25.050395 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 16 13:09:25.050405 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 16 13:09:25.050414 kernel: efifb: scrolling: redraw Dec 16 13:09:25.050422 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 13:09:25.050431 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 13:09:25.050438 kernel: fb0: EFI VGA frame buffer device Dec 16 13:09:25.050446 kernel: pstore: Using crash dump compression: deflate Dec 16 13:09:25.050453 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:09:25.050461 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:09:25.050470 kernel: Segment Routing with IPv6 Dec 16 13:09:25.050478 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:09:25.050487 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:09:25.050495 kernel: Key type dns_resolver registered Dec 16 13:09:25.050502 kernel: IPI shorthand broadcast: enabled Dec 16 13:09:25.050510 kernel: sched_clock: Marking stable (1746004521, 89678184)->(2107030121, -271347416) Dec 16 13:09:25.050517 kernel: registered taskstats version 1 Dec 16 13:09:25.050526 kernel: Loading compiled-in X.509 certificates Dec 16 13:09:25.050534 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8' Dec 16 13:09:25.050542 kernel: Demotion targets for Node 0: null Dec 16 13:09:25.050551 kernel: Key type .fscrypt registered Dec 16 13:09:25.050559 kernel: Key type fscrypt-provisioning registered Dec 16 13:09:25.050567 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 13:09:25.050575 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:09:25.050584 kernel: ima: No architecture policies found Dec 16 13:09:25.050591 kernel: clk: Disabling unused clocks Dec 16 13:09:25.050599 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 13:09:25.050608 kernel: Write protecting the kernel read-only data: 47104k Dec 16 13:09:25.050616 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 13:09:25.050624 kernel: Run /init as init process Dec 16 13:09:25.050633 kernel: with arguments: Dec 16 13:09:25.050641 kernel: /init Dec 16 13:09:25.050649 kernel: with environment: Dec 16 13:09:25.050656 kernel: HOME=/ Dec 16 13:09:25.050664 kernel: TERM=linux Dec 16 13:09:25.050672 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 13:09:25.050680 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 13:09:25.050689 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 13:09:25.050697 kernel: PTP clock support registered Dec 16 13:09:25.050715 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 13:09:25.050723 kernel: hv_vmbus: registering driver hv_utils Dec 16 13:09:25.050730 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 13:09:25.050739 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 13:09:25.050747 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 13:09:25.050755 kernel: SCSI subsystem initialized Dec 16 13:09:25.050764 kernel: hv_vmbus: registering driver hv_pci Dec 16 13:09:25.050885 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Dec 16 13:09:25.050983 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Dec 16 13:09:25.051092 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Dec 16 13:09:25.051190 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:09:25.051308 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Dec 16 13:09:25.051416 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Dec 16 13:09:25.051515 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:09:25.051620 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Dec 16 13:09:25.051629 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 13:09:25.051751 kernel: scsi host0: storvsc_host_t Dec 16 13:09:25.051870 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 13:09:25.051881 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 13:09:25.051889 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 13:09:25.051897 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 13:09:25.051999 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 13:09:25.052010 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 13:09:25.052020 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 13:09:25.052138 kernel: nvme nvme0: pci function c05b:00:00.0 Dec 16 13:09:25.052282 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Dec 16 13:09:25.052385 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 13:09:25.052399 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:09:25.052528 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 13:09:25.052545 kernel: cdrom: 
Uniform CD-ROM driver Revision: 3.20 Dec 16 13:09:25.052678 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 13:09:25.052695 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:09:25.052716 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:09:25.052725 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:09:25.052735 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 13:09:25.052757 kernel: raid6: avx512x4 gen() 46968 MB/s Dec 16 13:09:25.052770 kernel: raid6: avx512x2 gen() 46382 MB/s Dec 16 13:09:25.052778 kernel: raid6: avx512x1 gen() 30205 MB/s Dec 16 13:09:25.052787 kernel: raid6: avx2x4 gen() 43041 MB/s Dec 16 13:09:25.052796 kernel: raid6: avx2x2 gen() 44310 MB/s Dec 16 13:09:25.052805 kernel: raid6: avx2x1 gen() 29233 MB/s Dec 16 13:09:25.052813 kernel: raid6: using algorithm avx512x4 gen() 46968 MB/s Dec 16 13:09:25.052827 kernel: raid6: .... xor() 7817 MB/s, rmw enabled Dec 16 13:09:25.052836 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:09:25.052845 kernel: xor: automatically using best checksumming function avx Dec 16 13:09:25.052854 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:09:25.052863 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (974) Dec 16 13:09:25.052872 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7 Dec 16 13:09:25.052881 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:09:25.052892 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:09:25.052900 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:09:25.052909 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:09:25.052917 kernel: loop: module loaded Dec 16 13:09:25.052927 kernel: loop0: detected capacity change from 0 to 100528 Dec 16 13:09:25.052936 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:09:25.052948 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:09:25.052966 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:09:25.052976 systemd[1]: Detected virtualization microsoft. Dec 16 13:09:25.052985 systemd[1]: Detected architecture x86-64. Dec 16 13:09:25.052993 systemd[1]: Running in initrd. Dec 16 13:09:25.053002 systemd[1]: No hostname configured, using default hostname. Dec 16 13:09:25.053011 systemd[1]: Hostname set to . Dec 16 13:09:25.053020 systemd[1]: Initializing machine ID from random generator. Dec 16 13:09:25.053028 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:09:25.053036 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:09:25.053044 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:09:25.053054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 16 13:09:25.053062 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:09:25.053072 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:09:25.053081 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:09:25.053090 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:09:25.053099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:09:25.053109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:09:25.053282 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:09:25.053294 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:09:25.053303 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:09:25.053313 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:09:25.053322 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:09:25.053334 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:09:25.053345 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:09:25.053354 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:09:25.053362 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:09:25.053371 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:09:25.053380 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:09:25.053390 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:09:25.053400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:09:25.053409 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:09:25.053419 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:09:25.053427 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:09:25.053436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:09:25.053465 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:09:25.053472 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:09:25.053479 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:09:25.053485 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:09:25.053490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:09:25.053496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:09:25.053503 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:09:25.053509 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:09:25.053514 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:09:25.053535 systemd-journald[1108]: Collecting audit messages is enabled. Dec 16 13:09:25.053551 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Dec 16 13:09:25.053557 kernel: audit: type=1130 audit(1765890565.045:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.053564 systemd-journald[1108]: Journal started Dec 16 13:09:25.053578 systemd-journald[1108]: Runtime Journal (/run/log/journal/08da8a8275bb4e9c8e60375f20a7f61d) is 8M, max 158.5M, 150.5M free. Dec 16 13:09:25.053851 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:09:25.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.060227 kernel: audit: type=1130 audit(1765890565.053:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.056471 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:09:25.206718 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:09:25.293717 kernel: Bridge firewalling registered Dec 16 13:09:25.293692 systemd-modules-load[1111]: Inserted module 'br_netfilter' Dec 16 13:09:25.295237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:09:25.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.300726 kernel: audit: type=1130 audit(1765890565.295:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.329449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:09:25.333927 systemd-tmpfiles[1123]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:09:25.338945 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:09:25.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.344942 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:09:25.345718 kernel: audit: type=1130 audit(1765890565.340:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.347815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:09:25.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:25.355840 kernel: audit: type=1130 audit(1765890565.344:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.407004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:09:25.413733 kernel: audit: type=1130 audit(1765890565.405:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.415265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:09:25.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.427026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:09:25.434105 kernel: audit: type=1130 audit(1765890565.422:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.434125 kernel: audit: type=1130 audit(1765890565.425:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.429815 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:09:25.439000 audit: BPF prog-id=6 op=LOAD Dec 16 13:09:25.443612 kernel: audit: type=1334 audit(1765890565.439:10): prog-id=6 op=LOAD Dec 16 13:09:25.442602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:09:25.456440 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:09:25.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.465796 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:09:25.472646 kernel: audit: type=1130 audit(1765890565.459:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.575401 systemd-resolved[1139]: Positive Trust Anchors: Dec 16 13:09:25.575412 systemd-resolved[1139]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:09:25.575415 systemd-resolved[1139]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 13:09:25.575445 systemd-resolved[1139]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:09:25.606323 dracut-cmdline[1151]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 13:09:25.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:25.609612 systemd-resolved[1139]: Defaulting to hostname 'linux'. Dec 16 13:09:25.610329 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:09:25.610816 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:09:25.728717 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:09:25.941722 kernel: iscsi: registered transport (tcp) Dec 16 13:09:26.005873 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:09:26.005924 kernel: QLogic iSCSI HBA Driver Dec 16 13:09:26.063588 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:09:26.082914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:09:26.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:26.088110 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:09:26.120193 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:09:26.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:26.123399 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:09:26.128679 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:09:26.152929 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:09:26.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:26.154000 audit: BPF prog-id=7 op=LOAD Dec 16 13:09:26.154000 audit: BPF prog-id=8 op=LOAD Dec 16 13:09:26.158944 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:09:26.188863 systemd-udevd[1392]: Using default interface naming scheme 'v257'. Dec 16 13:09:26.200725 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:09:26.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:26.206739 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:09:26.218892 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:09:26.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:26.222000 audit: BPF prog-id=9 op=LOAD Dec 16 13:09:26.225814 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:09:26.229948 dracut-pre-trigger[1484]: rd.md=0: removing MD RAID activation Dec 16 13:09:26.250751 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:09:26.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:26.252820 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:09:26.278901 systemd-networkd[1495]: lo: Link UP Dec 16 13:09:26.278906 systemd-networkd[1495]: lo: Gained carrier Dec 16 13:09:26.282229 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:09:26.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:26.286621 systemd[1]: Reached target network.target - Network. Dec 16 13:09:26.301788 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:09:26.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:26.307528 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:09:26.376169 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:09:26.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:26.376220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:09:26.376674 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:09:26.624353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 13:09:26.740722 kernel: nvme nvme0: using unchecked data buffer Dec 16 13:09:26.746844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#170 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:09:26.749993 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:09:26.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:26.820520 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Dec 16 13:09:26.822053 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:09:26.848741 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 13:09:26.857716 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5234f8d1 (unnamed net_device) (uninitialized): VF slot 1 added Dec 16 13:09:26.865926 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:09:27.056084 systemd-networkd[1495]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:09:27.059750 systemd-networkd[1495]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:09:27.064109 systemd-networkd[1495]: eth0: Link UP Dec 16 13:09:27.064202 systemd-networkd[1495]: eth0: Gained carrier Dec 16 13:09:27.064213 systemd-networkd[1495]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:09:27.073887 kernel: AES CTR mode by8 optimization enabled Dec 16 13:09:27.088757 systemd-networkd[1495]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 16 13:09:27.120678 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Dec 16 13:09:27.130993 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Dec 16 13:09:27.152897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 16 13:09:27.221909 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:09:27.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:27.224829 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:09:27.228050 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:09:27.229958 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:09:27.261770 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:09:27.414364 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:09:27.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:27.874196 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Dec 16 13:09:27.874384 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Dec 16 13:09:27.876932 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Dec 16 13:09:27.878263 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:09:27.882779 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Dec 16 13:09:27.885767 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Dec 16 13:09:27.889874 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Dec 16 13:09:27.891741 kernel: pci 7870:00:00.0: enabling Extended Tags Dec 16 13:09:27.903762 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:09:27.903945 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Dec 16 13:09:27.907806 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Dec 16 13:09:28.010865 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Dec 16 13:09:28.018717 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Dec 16 13:09:28.021138 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5234f8d1 eth0: VF registering: eth1 Dec 16 13:09:28.021302 kernel: mana 7870:00:00.0 eth1: joined to eth0 Dec 16 13:09:28.025324 systemd-networkd[1495]: eth1: Interface name change detected, renamed to enP30832s1. Dec 16 13:09:28.028826 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Dec 16 13:09:28.126733 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 16 13:09:28.129725 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:09:28.131480 systemd-networkd[1495]: enP30832s1: Link UP Dec 16 13:09:28.134767 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5234f8d1 eth0: Data path switched to VF: enP30832s1 Dec 16 13:09:28.132466 systemd-networkd[1495]: enP30832s1: Gained carrier Dec 16 13:09:28.471200 disk-uuid[1664]: Warning: The kernel is still using the old partition table. Dec 16 13:09:28.471200 disk-uuid[1664]: The new table will be used at the next reboot or after you Dec 16 13:09:28.471200 disk-uuid[1664]: run partprobe(8) or kpartx(8) Dec 16 13:09:28.471200 disk-uuid[1664]: The operation has completed successfully. Dec 16 13:09:28.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:28.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:28.477528 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:09:28.477606 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:09:28.483128 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Dec 16 13:09:28.532873 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1715) Dec 16 13:09:28.532998 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:09:28.534477 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:09:28.551971 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:09:28.552004 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:09:28.553204 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:09:28.558714 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:09:28.559416 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:09:28.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:28.562744 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:09:28.932796 systemd-networkd[1495]: eth0: Gained IPv6LL Dec 16 13:09:30.167335 ignition[1734]: Ignition 2.24.0 Dec 16 13:09:30.167346 ignition[1734]: Stage: fetch-offline Dec 16 13:09:30.169068 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:09:30.178236 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 13:09:30.178261 kernel: audit: type=1130 audit(1765890570.171:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.167444 ignition[1734]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:09:30.167452 ignition[1734]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:09:30.178826 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 13:09:30.167517 ignition[1734]: parsed url from cmdline: "" Dec 16 13:09:30.167519 ignition[1734]: no config URL provided Dec 16 13:09:30.167523 ignition[1734]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:09:30.167530 ignition[1734]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:09:30.167535 ignition[1734]: failed to fetch config: resource requires networking Dec 16 13:09:30.167697 ignition[1734]: Ignition finished successfully Dec 16 13:09:30.204602 ignition[1740]: Ignition 2.24.0 Dec 16 13:09:30.204612 ignition[1740]: Stage: fetch Dec 16 13:09:30.204847 ignition[1740]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:09:30.204855 ignition[1740]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:09:30.204938 ignition[1740]: parsed url from cmdline: "" Dec 16 13:09:30.204940 ignition[1740]: no config URL provided Dec 16 13:09:30.204945 ignition[1740]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:09:30.204950 ignition[1740]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:09:30.204969 ignition[1740]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 13:09:30.312797 ignition[1740]: GET result: OK Dec 16 13:09:30.312886 ignition[1740]: config has been read from IMDS userdata Dec 16 13:09:30.312908 ignition[1740]: parsing config with SHA512: 836d2ade5dde5a0e4cbbd125ec516267640e7ddf7f937f95be8868b693baf4f12b73a7d5ef9adf6f8dc4a2de3effb9383f2f3267ae0256e51ca3df10dea264d4 Dec 16 13:09:30.319516 unknown[1740]: fetched base config from "system" Dec 16 13:09:30.319523 unknown[1740]: fetched base config from "system" Dec 16 13:09:30.321010 ignition[1740]: fetch: fetch complete Dec 16 13:09:30.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.319528 unknown[1740]: fetched user config from "azure" Dec 16 13:09:30.333868 kernel: audit: type=1130 audit(1765890570.324:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.321013 ignition[1740]: fetch: fetch passed Dec 16 13:09:30.322449 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:09:30.321041 ignition[1740]: Ignition finished successfully Dec 16 13:09:30.328814 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:09:30.353446 ignition[1747]: Ignition 2.24.0 Dec 16 13:09:30.353456 ignition[1747]: Stage: kargs Dec 16 13:09:30.353645 ignition[1747]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:09:30.353652 ignition[1747]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:09:30.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.357221 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:09:30.367156 kernel: audit: type=1130 audit(1765890570.359:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.354315 ignition[1747]: kargs: kargs passed Dec 16 13:09:30.362714 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
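
[Editor's note] For reference on the metadata fetch logged above: the Ignition fetch stage reads the instance userData from the Azure Instance Metadata Service at the exact URL shown in the log. Below is a minimal, stand-alone sketch of that request; the "Metadata: true" header, the proxy bypass, and the base64 decode reflect general Azure IMDS behaviour and are assumptions not visible in the log itself. The log also shows Ignition recording a SHA-512 of the fetched config before parsing it.

    import base64
    import urllib.request

    # Azure Instance Metadata Service endpoint seen in the Ignition log above.
    IMDS_USERDATA_URL = (
        "http://169.254.169.254/metadata/instance/compute/userData"
        "?api-version=2021-01-01&format=text"
    )

    def fetch_userdata() -> bytes:
        """Fetch instance userData from Azure IMDS (link-local address)."""
        req = urllib.request.Request(
            IMDS_USERDATA_URL,
            # Assumption: IMDS rejects requests that lack this header.
            headers={"Metadata": "true"},
        )
        # Bypass any configured HTTP proxy; 169.254.169.254 is only
        # reachable directly from the instance.
        opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
        with opener.open(req, timeout=5) as resp:
            body = resp.read()
        # Assumption: userData is delivered base64-encoded; decode it to get
        # the raw Ignition config bytes.
        return base64.b64decode(body)

    if __name__ == "__main__":
        print(fetch_userdata().decode("utf-8", errors="replace"))
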
Dec 16 13:09:30.354344 ignition[1747]: Ignition finished successfully Dec 16 13:09:30.384209 ignition[1753]: Ignition 2.24.0 Dec 16 13:09:30.384219 ignition[1753]: Stage: disks Dec 16 13:09:30.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.386119 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:09:30.403820 kernel: audit: type=1130 audit(1765890570.387:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.384410 ignition[1753]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:09:30.389212 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:09:30.384416 ignition[1753]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:09:30.394007 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:09:30.385081 ignition[1753]: disks: disks passed Dec 16 13:09:30.394914 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:09:30.385109 ignition[1753]: Ignition finished successfully Dec 16 13:09:30.394937 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:09:30.394956 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:09:30.396804 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:09:30.501526 systemd-fsck[1761]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 13:09:30.505413 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:09:30.512790 kernel: audit: type=1130 audit(1765890570.507:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:30.513397 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:09:30.795658 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:09:30.797562 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 13:09:30.796276 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:09:30.824038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:09:30.841779 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:09:30.845643 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 13:09:30.849427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:09:30.849729 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 16 13:09:30.858144 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1770) Dec 16 13:09:30.858169 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:09:30.858181 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:09:30.863611 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:09:30.870104 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:09:30.870129 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:09:30.870143 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:09:30.866830 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:09:30.872911 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:09:32.111341 coreos-metadata[1772]: Dec 16 13:09:32.111 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:09:32.117631 coreos-metadata[1772]: Dec 16 13:09:32.117 INFO Fetch successful Dec 16 13:09:32.118770 coreos-metadata[1772]: Dec 16 13:09:32.117 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:09:32.125903 coreos-metadata[1772]: Dec 16 13:09:32.125 INFO Fetch successful Dec 16 13:09:32.140359 coreos-metadata[1772]: Dec 16 13:09:32.140 INFO wrote hostname ci-4547.0.0-a-e647365c22 to /sysroot/etc/hostname Dec 16 13:09:32.143599 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 13:09:32.151828 kernel: audit: type=1130 audit(1765890572.146:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:32.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:34.961008 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:09:34.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:34.968547 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:09:34.972815 kernel: audit: type=1130 audit(1765890574.962:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:34.970761 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:09:35.023279 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:09:35.027650 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:09:35.043426 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 13:09:35.050819 kernel: audit: type=1130 audit(1765890575.044:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:35.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:35.051120 ignition[1875]: INFO : Ignition 2.24.0 Dec 16 13:09:35.051120 ignition[1875]: INFO : Stage: mount Dec 16 13:09:35.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:35.058664 ignition[1875]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:09:35.058664 ignition[1875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:09:35.058664 ignition[1875]: INFO : mount: mount passed Dec 16 13:09:35.058664 ignition[1875]: INFO : Ignition finished successfully Dec 16 13:09:35.067826 kernel: audit: type=1130 audit(1765890575.053:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:35.053879 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:09:35.059814 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:09:35.073855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:09:35.105718 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1888) Dec 16 13:09:35.105748 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:09:35.107720 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:09:35.115646 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:09:35.115677 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:09:35.115689 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:09:35.117354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:09:35.139172 ignition[1904]: INFO : Ignition 2.24.0 Dec 16 13:09:35.139172 ignition[1904]: INFO : Stage: files Dec 16 13:09:35.142843 ignition[1904]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:09:35.142843 ignition[1904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:09:35.142843 ignition[1904]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:09:35.169269 ignition[1904]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:09:35.169269 ignition[1904]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:09:35.253988 ignition[1904]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:09:35.257780 ignition[1904]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:09:35.257780 ignition[1904]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:09:35.256594 unknown[1904]: wrote ssh authorized keys file for user: core Dec 16 13:09:35.269898 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:09:35.272916 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 16 13:09:35.425404 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:09:35.750520 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:09:35.750520 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:09:35.758755 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:09:35.790731 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:09:35.790731 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:09:35.790731 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 16 13:09:36.250993 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 13:09:36.884955 ignition[1904]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:09:36.884955 ignition[1904]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 13:09:36.954334 ignition[1904]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:09:36.960297 ignition[1904]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:09:36.960297 ignition[1904]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 13:09:36.960297 ignition[1904]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:09:36.971107 kernel: audit: type=1130 audit(1765890576.964:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:36.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:36.971175 ignition[1904]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:09:36.971175 ignition[1904]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:09:36.971175 ignition[1904]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:09:36.971175 ignition[1904]: INFO : files: files passed Dec 16 13:09:36.971175 ignition[1904]: INFO : Ignition finished successfully Dec 16 13:09:36.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:36.965551 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:09:36.998784 kernel: audit: type=1130 audit(1765890576.988:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:36.998813 kernel: audit: type=1131 audit(1765890576.989:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:36.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:36.970561 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:09:36.983817 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:09:36.986585 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:09:37.020220 kernel: audit: type=1130 audit(1765890577.011:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.020286 initrd-setup-root-after-ignition[1936]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:09:37.020286 initrd-setup-root-after-ignition[1936]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:09:36.986662 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:09:37.026805 initrd-setup-root-after-ignition[1941]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:09:37.009608 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:09:37.013159 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:09:37.017943 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:09:37.058835 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:09:37.069394 kernel: audit: type=1130 audit(1765890577.057:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.069416 kernel: audit: type=1131 audit(1765890577.061:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.058914 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 13:09:37.062132 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 13:09:37.062383 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 13:09:37.065930 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 13:09:37.066556 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 13:09:37.094952 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:09:37.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:37.099822 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 13:09:37.106804 kernel: audit: type=1130 audit(1765890577.094:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.112324 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:09:37.112519 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:09:37.116864 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:09:37.119475 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 13:09:37.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.120720 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 13:09:37.132749 kernel: audit: type=1131 audit(1765890577.121:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.120854 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:09:37.123548 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 13:09:37.130915 systemd[1]: Stopped target basic.target - Basic System. Dec 16 13:09:37.136681 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 13:09:37.139967 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:09:37.146830 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 13:09:37.147377 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:09:37.151841 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 13:09:37.155851 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:09:37.158295 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 13:09:37.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.160751 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 13:09:37.161779 systemd[1]: Stopped target swap.target - Swaps. Dec 16 13:09:37.162000 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 13:09:37.162126 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:09:37.169729 kernel: audit: type=1131 audit(1765890577.160:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.172757 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:09:37.174176 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:09:37.176872 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 13:09:37.178112 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 13:09:37.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.180818 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 13:09:37.193925 kernel: audit: type=1131 audit(1765890577.183:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.180940 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 13:09:37.185044 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 13:09:37.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.185159 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:09:37.191451 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 13:09:37.191576 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 13:09:37.193994 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 13:09:37.194111 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 13:09:37.198896 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 13:09:37.204171 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 13:09:37.212138 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 13:09:37.212536 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:09:37.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.221897 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 13:09:37.222005 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:09:37.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:37.231565 ignition[1961]: INFO : Ignition 2.24.0 Dec 16 13:09:37.231565 ignition[1961]: INFO : Stage: umount Dec 16 13:09:37.231565 ignition[1961]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:09:37.231565 ignition[1961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:09:37.231565 ignition[1961]: INFO : umount: umount passed Dec 16 13:09:37.231565 ignition[1961]: INFO : Ignition finished successfully Dec 16 13:09:37.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.228942 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 13:09:37.229071 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:09:37.244933 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 13:09:37.245016 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 13:09:37.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.250721 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 13:09:37.250792 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 13:09:37.255084 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 13:09:37.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.255157 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 13:09:37.258802 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 13:09:37.258848 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 13:09:37.259745 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 13:09:37.259780 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 13:09:37.260262 systemd[1]: Stopped target network.target - Network. Dec 16 13:09:37.260288 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 13:09:37.260320 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 16 13:09:37.260569 systemd[1]: Stopped target paths.target - Path Units. Dec 16 13:09:37.260591 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 13:09:37.266738 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:09:37.270150 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 13:09:37.272472 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 13:09:37.283223 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 13:09:37.283259 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:09:37.286167 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 13:09:37.286199 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:09:37.294804 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 13:09:37.294832 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:09:37.298768 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 13:09:37.298815 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 13:09:37.306776 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 13:09:37.306811 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 13:09:37.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.312427 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 13:09:37.314777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 13:09:37.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.316324 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 13:09:37.316819 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 13:09:37.316886 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 13:09:37.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.317360 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 13:09:37.317425 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 13:09:37.325872 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 13:09:37.325959 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 13:09:37.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:37.335000 audit: BPF prog-id=9 op=UNLOAD Dec 16 13:09:37.335000 audit: BPF prog-id=6 op=UNLOAD Dec 16 13:09:37.330571 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 13:09:37.330644 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 13:09:37.337558 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 13:09:37.340389 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 13:09:37.340417 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:09:37.344790 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 13:09:37.349166 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 13:09:37.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.349215 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:09:37.354877 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:09:37.354928 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:09:37.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.362783 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 13:09:37.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.362827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 13:09:37.367497 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:09:37.377640 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 13:09:37.382841 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:09:37.392782 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5234f8d1 eth0: Data path switched from VF: enP30832s1 Dec 16 13:09:37.392960 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:09:37.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.386607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 13:09:37.386641 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 13:09:37.391345 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 13:09:37.391372 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:09:37.398204 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 13:09:37.398256 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:09:37.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.401825 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Dec 16 13:09:37.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.401859 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 13:09:37.404949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 13:09:37.404985 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:09:37.415924 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 13:09:37.417378 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 13:09:37.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.417429 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:09:37.420436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 13:09:37.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:37.420486 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:09:37.420640 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 13:09:37.420668 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:09:37.420907 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 13:09:37.420935 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 16 13:09:37.421149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:09:37.421179 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:09:37.423607 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 13:09:37.423681 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 13:09:37.434286 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 13:09:37.434366 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 13:09:37.434997 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 13:09:37.436795 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 13:09:37.453728 systemd[1]: Switching root. Dec 16 13:09:37.510836 systemd-journald[1108]: Journal stopped Dec 16 13:09:45.719559 systemd-journald[1108]: Received SIGTERM from PID 1 (systemd). Dec 16 13:09:45.719578 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:09:45.719588 kernel: SELinux: policy capability open_perms=1 Dec 16 13:09:45.719595 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:09:45.719602 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:09:45.719608 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:09:45.719615 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:09:45.719621 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:09:45.719630 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:09:45.719635 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:09:45.719643 systemd[1]: Successfully loaded SELinux policy in 219.312ms. Dec 16 13:09:45.719651 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.853ms. Dec 16 13:09:45.719659 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:09:45.719668 systemd[1]: Detected virtualization microsoft. Dec 16 13:09:45.719675 systemd[1]: Detected architecture x86-64. Dec 16 13:09:45.719682 systemd[1]: Detected first boot. Dec 16 13:09:45.719689 systemd[1]: Hostname set to . Dec 16 13:09:45.719734 systemd[1]: Initializing machine ID from random generator. Dec 16 13:09:45.719752 zram_generator::config[2004]: No configuration found. Dec 16 13:09:45.719759 kernel: Guest personality initialized and is inactive Dec 16 13:09:45.719765 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Dec 16 13:09:45.719771 kernel: Initialized host personality Dec 16 13:09:45.719776 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:09:45.719782 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:09:45.719790 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 16 13:09:45.719797 kernel: audit: type=1334 audit(1765890584.044:92): prog-id=12 op=LOAD Dec 16 13:09:45.719802 kernel: audit: type=1334 audit(1765890584.044:93): prog-id=3 op=UNLOAD Dec 16 13:09:45.719808 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:09:45.719814 kernel: audit: type=1334 audit(1765890584.044:94): prog-id=13 op=LOAD Dec 16 13:09:45.719819 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Dec 16 13:09:45.719827 kernel: audit: type=1334 audit(1765890584.044:95): prog-id=14 op=LOAD Dec 16 13:09:45.719832 kernel: audit: type=1334 audit(1765890584.044:96): prog-id=4 op=UNLOAD Dec 16 13:09:45.719838 kernel: audit: type=1334 audit(1765890584.044:97): prog-id=5 op=UNLOAD Dec 16 13:09:45.719844 kernel: audit: type=1131 audit(1765890584.045:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.719849 kernel: audit: type=1334 audit(1765890584.059:99): prog-id=12 op=UNLOAD Dec 16 13:09:45.719855 kernel: audit: type=1130 audit(1765890584.062:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.719862 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:09:45.719868 kernel: audit: type=1131 audit(1765890584.062:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.719876 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:09:45.719883 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 13:09:45.719891 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:09:45.719897 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:09:45.719905 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:09:45.719911 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:09:45.719917 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:09:45.719924 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:09:45.719930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:09:45.719937 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:09:45.719944 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 13:09:45.719950 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:09:45.719956 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:09:45.719962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:09:45.719968 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:09:45.719974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:09:45.719982 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:09:45.719988 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:09:45.719994 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:09:45.719999 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:09:45.720006 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Dec 16 13:09:45.720012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:09:45.720018 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:09:45.720025 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 13:09:45.720031 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:09:45.720037 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:09:45.720043 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:09:45.720048 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:09:45.720057 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:09:45.720063 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:09:45.720069 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 13:09:45.720075 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:09:45.720081 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 13:09:45.720088 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 13:09:45.720094 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:09:45.720100 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:09:45.720107 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:09:45.720113 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:09:45.720119 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:09:45.720125 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:09:45.720132 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:09:45.720138 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:09:45.720144 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:09:45.720150 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:09:45.720156 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 13:09:45.720162 systemd[1]: Reached target machines.target - Containers. Dec 16 13:09:45.720168 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:09:45.720176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:09:45.720182 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:09:45.720189 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:09:45.720195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:09:45.720201 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:09:45.720207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:09:45.720214 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:09:45.720221 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 13:09:45.720227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:09:45.720233 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:09:45.720239 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:09:45.720245 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:09:45.720251 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:09:45.720259 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:09:45.720265 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:09:45.720271 kernel: fuse: init (API version 7.41) Dec 16 13:09:45.720277 kernel: ACPI: bus type drm_connector registered Dec 16 13:09:45.720282 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:09:45.720289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:09:45.720295 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:09:45.720302 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:09:45.720308 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:09:45.720314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:09:45.720321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:09:45.720327 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:09:45.720333 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:09:45.720341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:09:45.720347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:09:45.720353 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:09:45.720359 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:09:45.720365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:09:45.720371 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:09:45.720377 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:09:45.720388 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:09:45.720394 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:09:45.720401 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:09:45.720407 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:09:45.720414 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:09:45.720420 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:09:45.720426 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 13:09:45.720432 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Dec 16 13:09:45.720440 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:09:45.720447 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:09:45.720453 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:09:45.720460 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:09:45.720467 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:09:45.720474 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:09:45.720481 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:09:45.720487 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:09:45.720493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:09:45.720500 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:09:45.720507 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:09:45.720514 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:09:45.720520 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:09:45.720526 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:09:45.720532 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:09:45.720538 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:09:45.720545 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:09:45.720552 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 13:09:45.720558 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:09:45.720565 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:09:45.720573 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:09:45.720580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:09:45.720587 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:09:45.720593 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:09:45.720601 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:09:45.720607 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:09:45.720627 systemd-journald[2085]: Collecting audit messages is enabled. Dec 16 13:09:45.720643 systemd-journald[2085]: Journal started Dec 16 13:09:45.720658 systemd-journald[2085]: Runtime Journal (/run/log/journal/37c0ed3ca1554e38abaa70616ac1ddbd) is 8M, max 158.5M, 150.5M free. 
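journald reports the runtime journal at 8M with a 158.5M cap. A small sketch that measures the same thing by summing file sizes under the runtime journal directory named in the log; `journalctl --disk-usage` reports this more authoritatively.

```python
# Sketch: sum the sizes of journal files under /run/log/journal
# (the per-machine-id layout matches the path shown in the log).
import os

def journal_usage(root="/run/log/journal"):
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # files can rotate away while scanning
    return total

if __name__ == "__main__":
    print(f"runtime journal usage: {journal_usage() / 2**20:.1f} MiB")
```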
Dec 16 13:09:44.197000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 13:09:44.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:44.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:44.797000 audit: BPF prog-id=14 op=UNLOAD Dec 16 13:09:44.797000 audit: BPF prog-id=13 op=UNLOAD Dec 16 13:09:44.798000 audit: BPF prog-id=15 op=LOAD Dec 16 13:09:44.798000 audit: BPF prog-id=16 op=LOAD Dec 16 13:09:44.798000 audit: BPF prog-id=17 op=LOAD Dec 16 13:09:45.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:45.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.717000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 13:09:45.717000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdbe6c3520 a2=4000 a3=0 items=0 ppid=1 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:45.717000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 13:09:44.037623 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:09:45.723408 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:09:44.046208 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 13:09:44.046618 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:09:45.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.731408 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
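The audit stream above interleaves SERVICE_START and SERVICE_STOP records with the regular systemd messages. A sketch that pulls the unit name and result out of lines shaped like these; the regular expression is tailored to the format visible in this log, not to every auditd output variant.

```python
# Sketch: extract (event, unit, result) from audit lines like the ones above.
import re

AUDIT_RE = re.compile(
    r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+) .*?res=(\w+)"
)

def parse_audit(lines):
    for line in lines:
        match = AUDIT_RE.search(line)
        if match:
            yield match.group(1), match.group(2), match.group(3)

if __name__ == "__main__":
    sample = (
        "Dec 16 13:09:45.267000 audit[1]: SERVICE_START pid=1 uid=0 "
        "msg='unit=kmod-static-nodes comm=\"systemd\" res=success'"
    )
    print(list(parse_audit([sample])))  # [('SERVICE_START', 'kmod-static-nodes', 'success')]
```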
Dec 16 13:09:45.787489 systemd-tmpfiles[2101]: ACLs are not supported, ignoring. Dec 16 13:09:45.787505 systemd-tmpfiles[2101]: ACLs are not supported, ignoring. Dec 16 13:09:45.789692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:09:45.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.806241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:09:45.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.808226 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:09:45.868554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:09:45.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.910716 kernel: loop1: detected capacity change from 0 to 224512 Dec 16 13:09:45.940757 systemd-journald[2085]: Time spent on flushing to /var/log/journal/37c0ed3ca1554e38abaa70616ac1ddbd is 8.944ms for 1143 entries. Dec 16 13:09:45.940757 systemd-journald[2085]: System Journal (/var/log/journal/37c0ed3ca1554e38abaa70616ac1ddbd) is 8M, max 2.2G, 2.2G free. Dec 16 13:09:45.982079 systemd-journald[2085]: Received client request to flush runtime journal. Dec 16 13:09:45.982897 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:09:45.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:46.110717 kernel: loop2: detected capacity change from 0 to 50784 Dec 16 13:09:46.265209 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:09:46.512731 kernel: loop3: detected capacity change from 0 to 27728 Dec 16 13:09:46.623024 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:09:46.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:46.629111 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:09:46.879874 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:09:46.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:46.882000 audit: BPF prog-id=18 op=LOAD Dec 16 13:09:46.882000 audit: BPF prog-id=19 op=LOAD Dec 16 13:09:46.882000 audit: BPF prog-id=20 op=LOAD Dec 16 13:09:46.888000 audit: BPF prog-id=21 op=LOAD Dec 16 13:09:46.886831 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
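systemd-sysctl, finished above, applies kernel variables from sysctl.d by writing under /proc/sys. A read-only sketch showing how a dotted variable name maps onto that path; vm.swappiness is used only as a familiar example and is not a variable named in this log.

```python
# Sketch: read a sysctl value by mapping the dotted name onto /proc/sys.
# vm.swappiness is just a familiar example, not taken from this log.
def read_sysctl(name):
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as f:
        return f.read().strip()

if __name__ == "__main__":
    print("vm.swappiness =", read_sysctl("vm.swappiness"))
```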
Dec 16 13:09:46.891013 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:09:46.895856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:09:46.903000 audit: BPF prog-id=22 op=LOAD Dec 16 13:09:46.903000 audit: BPF prog-id=23 op=LOAD Dec 16 13:09:46.903000 audit: BPF prog-id=24 op=LOAD Dec 16 13:09:46.905577 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 13:09:46.907000 audit: BPF prog-id=25 op=LOAD Dec 16 13:09:46.908000 audit: BPF prog-id=26 op=LOAD Dec 16 13:09:46.908000 audit: BPF prog-id=27 op=LOAD Dec 16 13:09:46.911205 systemd-tmpfiles[2168]: ACLs are not supported, ignoring. Dec 16 13:09:46.911223 systemd-tmpfiles[2168]: ACLs are not supported, ignoring. Dec 16 13:09:46.911918 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:09:46.917813 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:09:46.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:46.944763 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:09:46.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:46.946000 audit: BPF prog-id=8 op=UNLOAD Dec 16 13:09:46.946000 audit: BPF prog-id=7 op=UNLOAD Dec 16 13:09:46.946000 audit: BPF prog-id=28 op=LOAD Dec 16 13:09:46.946000 audit: BPF prog-id=29 op=LOAD Dec 16 13:09:46.950820 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:09:46.953905 systemd-nsresourced[2170]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 13:09:46.955195 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 13:09:46.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:46.974856 systemd-udevd[2174]: Using default interface naming scheme 'v257'. Dec 16 13:09:47.033039 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:09:47.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:47.132458 systemd-oomd[2166]: No swap; memory pressure usage will be degraded Dec 16 13:09:47.133005 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 13:09:47.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:47.144191 systemd-resolved[2167]: Positive Trust Anchors: Dec 16 13:09:47.144205 systemd-resolved[2167]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:09:47.144208 systemd-resolved[2167]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 13:09:47.144235 systemd-resolved[2167]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:09:47.234732 kernel: loop4: detected capacity change from 0 to 111560 Dec 16 13:09:47.254505 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:09:47.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:47.257000 audit: BPF prog-id=30 op=LOAD Dec 16 13:09:47.260226 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:09:47.311750 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:09:47.327500 systemd-resolved[2167]: Using system hostname 'ci-4547.0.0-a-e647365c22'. Dec 16 13:09:47.329528 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:09:47.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:47.332965 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:09:47.347780 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:09:47.408716 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 13:09:47.425718 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 13:09:47.427717 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 13:09:47.429924 kernel: Console: switching to colour dummy device 80x25 Dec 16 13:09:47.434381 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 13:09:47.471765 kernel: hv_vmbus: registering driver hv_balloon Dec 16 13:09:47.472763 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 13:09:47.564902 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:09:47.572049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:09:47.572200 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:09:47.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:47.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:47.574423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
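systemd-resolved prints its positive trust anchors as root-zone DS records in presentation format: owner, class, type, key tag, algorithm, digest type, digest. A sketch that splits such a record into named fields; it performs no DNSSEC validation.

```python
# Sketch: split a DS record in presentation format, as printed by resolved above,
# into its named fields. No DNSSEC validation is performed here.
def parse_ds(record):
    owner, klass, rtype, key_tag, algorithm, digest_type, digest = record.split(maxsplit=6)
    if (klass, rtype) != ("IN", "DS"):
        raise ValueError("not an IN DS record")
    return {
        "owner": owner,
        "key_tag": int(key_tag),
        "algorithm": int(algorithm),      # 8 = RSA/SHA-256
        "digest_type": int(digest_type),  # 2 = SHA-256
        "digest": digest.replace(" ", "").lower(),
    }

if __name__ == "__main__":
    print(parse_ds(". IN DS 20326 8 2 "
                   "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))
```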
Dec 16 13:09:47.685721 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:09:47.910559 systemd-networkd[2197]: lo: Link UP Dec 16 13:09:47.910565 systemd-networkd[2197]: lo: Gained carrier Dec 16 13:09:47.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:47.911726 systemd-networkd[2197]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:09:47.911733 systemd-networkd[2197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:09:47.911881 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:09:47.912036 systemd[1]: Reached target network.target - Network. Dec 16 13:09:47.914104 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:09:47.917377 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:09:47.920725 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 16 13:09:47.923829 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:09:47.924371 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5234f8d1 eth0: Data path switched to VF: enP30832s1 Dec 16 13:09:47.924617 systemd-networkd[2197]: enP30832s1: Link UP Dec 16 13:09:47.924724 systemd-networkd[2197]: eth0: Link UP Dec 16 13:09:47.924727 systemd-networkd[2197]: eth0: Gained carrier Dec 16 13:09:47.924741 systemd-networkd[2197]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:09:47.928986 systemd-networkd[2197]: enP30832s1: Gained carrier Dec 16 13:09:47.937739 systemd-networkd[2197]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 16 13:09:47.961716 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Dec 16 13:09:48.094517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 16 13:09:48.098189 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:09:48.166622 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:09:48.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:48.174670 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:09:48.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:48.429732 kernel: loop5: detected capacity change from 0 to 224512 Dec 16 13:09:48.441721 kernel: loop6: detected capacity change from 0 to 50784 Dec 16 13:09:48.452718 kernel: loop7: detected capacity change from 0 to 27728 Dec 16 13:09:48.461744 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 13:09:48.517775 (sd-merge)[2272]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 16 13:09:48.519863 (sd-merge)[2272]: Merged extensions into '/usr'. Dec 16 13:09:48.522773 systemd[1]: Reload requested from client PID 2106 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:09:48.522787 systemd[1]: Reloading... Dec 16 13:09:48.570783 zram_generator::config[2306]: No configuration found. Dec 16 13:09:48.741414 systemd[1]: Reloading finished in 218 ms. Dec 16 13:09:48.763212 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:09:48.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:48.771578 systemd[1]: Starting ensure-sysext.service... Dec 16 13:09:48.772000 audit: BPF prog-id=31 op=LOAD Dec 16 13:09:48.772000 audit: BPF prog-id=30 op=UNLOAD Dec 16 13:09:48.772000 audit: BPF prog-id=32 op=LOAD Dec 16 13:09:48.772000 audit: BPF prog-id=25 op=UNLOAD Dec 16 13:09:48.772000 audit: BPF prog-id=33 op=LOAD Dec 16 13:09:48.772000 audit: BPF prog-id=34 op=LOAD Dec 16 13:09:48.772000 audit: BPF prog-id=26 op=UNLOAD Dec 16 13:09:48.772000 audit: BPF prog-id=27 op=UNLOAD Dec 16 13:09:48.774000 audit: BPF prog-id=35 op=LOAD Dec 16 13:09:48.774000 audit: BPF prog-id=18 op=UNLOAD Dec 16 13:09:48.774000 audit: BPF prog-id=36 op=LOAD Dec 16 13:09:48.774000 audit: BPF prog-id=37 op=LOAD Dec 16 13:09:48.774000 audit: BPF prog-id=19 op=UNLOAD Dec 16 13:09:48.774000 audit: BPF prog-id=20 op=UNLOAD Dec 16 13:09:48.775000 audit: BPF prog-id=38 op=LOAD Dec 16 13:09:48.775000 audit: BPF prog-id=22 op=UNLOAD Dec 16 13:09:48.775000 audit: BPF prog-id=39 op=LOAD Dec 16 13:09:48.775000 audit: BPF prog-id=40 op=LOAD Dec 16 13:09:48.775000 audit: BPF prog-id=23 op=UNLOAD Dec 16 13:09:48.775000 audit: BPF prog-id=24 op=UNLOAD Dec 16 13:09:48.773825 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:09:48.779000 audit: BPF prog-id=41 op=LOAD Dec 16 13:09:48.779000 audit: BPF prog-id=15 op=UNLOAD Dec 16 13:09:48.779000 audit: BPF prog-id=42 op=LOAD Dec 16 13:09:48.779000 audit: BPF prog-id=43 op=LOAD Dec 16 13:09:48.779000 audit: BPF prog-id=16 op=UNLOAD Dec 16 13:09:48.779000 audit: BPF prog-id=17 op=UNLOAD Dec 16 13:09:48.780000 audit: BPF prog-id=44 op=LOAD Dec 16 13:09:48.780000 audit: BPF prog-id=21 op=UNLOAD Dec 16 13:09:48.780000 audit: BPF prog-id=45 op=LOAD Dec 16 13:09:48.780000 audit: BPF prog-id=46 op=LOAD Dec 16 13:09:48.780000 audit: BPF prog-id=28 op=UNLOAD Dec 16 13:09:48.780000 audit: BPF prog-id=29 op=UNLOAD Dec 16 13:09:48.788576 systemd-tmpfiles[2365]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:09:48.788809 systemd-tmpfiles[2365]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:09:48.789004 systemd-tmpfiles[2365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
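sd-merge lists the extension images it overlays onto /usr ('containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'). A sketch that lists *.raw images in directories systemd-sysext commonly searches; the directory list below is an assumption about typical locations, and `systemd-sysext status` is the authoritative view.

```python
# Sketch: list candidate system-extension images (*.raw) in directories that
# systemd-sysext commonly searches. The directory list is an assumption.
import glob
import os

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def extension_images():
    images = []
    for directory in SEARCH_DIRS:
        images.extend(sorted(glob.glob(os.path.join(directory, "*.raw"))))
    return images

if __name__ == "__main__":
    for image in extension_images():
        print(image)
```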
Dec 16 13:09:48.789972 systemd-tmpfiles[2365]: ACLs are not supported, ignoring. Dec 16 13:09:48.790062 systemd-tmpfiles[2365]: ACLs are not supported, ignoring. Dec 16 13:09:48.790277 systemd[1]: Reload requested from client PID 2364 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:09:48.790292 systemd[1]: Reloading... Dec 16 13:09:48.822446 systemd-tmpfiles[2365]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:09:48.822454 systemd-tmpfiles[2365]: Skipping /boot Dec 16 13:09:48.829326 systemd-tmpfiles[2365]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:09:48.829408 systemd-tmpfiles[2365]: Skipping /boot Dec 16 13:09:48.852757 zram_generator::config[2396]: No configuration found. Dec 16 13:09:49.010327 systemd[1]: Reloading finished in 218 ms. Dec 16 13:09:49.021373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:09:49.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.024000 audit: BPF prog-id=47 op=LOAD Dec 16 13:09:49.024000 audit: BPF prog-id=41 op=UNLOAD Dec 16 13:09:49.024000 audit: BPF prog-id=48 op=LOAD Dec 16 13:09:49.025000 audit: BPF prog-id=49 op=LOAD Dec 16 13:09:49.025000 audit: BPF prog-id=42 op=UNLOAD Dec 16 13:09:49.025000 audit: BPF prog-id=43 op=UNLOAD Dec 16 13:09:49.025000 audit: BPF prog-id=50 op=LOAD Dec 16 13:09:49.025000 audit: BPF prog-id=32 op=UNLOAD Dec 16 13:09:49.025000 audit: BPF prog-id=51 op=LOAD Dec 16 13:09:49.025000 audit: BPF prog-id=52 op=LOAD Dec 16 13:09:49.025000 audit: BPF prog-id=33 op=UNLOAD Dec 16 13:09:49.025000 audit: BPF prog-id=34 op=UNLOAD Dec 16 13:09:49.026000 audit: BPF prog-id=53 op=LOAD Dec 16 13:09:49.026000 audit: BPF prog-id=35 op=UNLOAD Dec 16 13:09:49.026000 audit: BPF prog-id=54 op=LOAD Dec 16 13:09:49.026000 audit: BPF prog-id=55 op=LOAD Dec 16 13:09:49.026000 audit: BPF prog-id=36 op=UNLOAD Dec 16 13:09:49.026000 audit: BPF prog-id=37 op=UNLOAD Dec 16 13:09:49.028843 systemd-networkd[2197]: eth0: Gained IPv6LL Dec 16 13:09:49.030000 audit: BPF prog-id=56 op=LOAD Dec 16 13:09:49.030000 audit: BPF prog-id=57 op=LOAD Dec 16 13:09:49.030000 audit: BPF prog-id=45 op=UNLOAD Dec 16 13:09:49.030000 audit: BPF prog-id=46 op=UNLOAD Dec 16 13:09:49.030000 audit: BPF prog-id=58 op=LOAD Dec 16 13:09:49.031000 audit: BPF prog-id=44 op=UNLOAD Dec 16 13:09:49.032000 audit: BPF prog-id=59 op=LOAD Dec 16 13:09:49.032000 audit: BPF prog-id=38 op=UNLOAD Dec 16 13:09:49.032000 audit: BPF prog-id=60 op=LOAD Dec 16 13:09:49.032000 audit: BPF prog-id=61 op=LOAD Dec 16 13:09:49.032000 audit: BPF prog-id=39 op=UNLOAD Dec 16 13:09:49.032000 audit: BPF prog-id=40 op=UNLOAD Dec 16 13:09:49.032000 audit: BPF prog-id=62 op=LOAD Dec 16 13:09:49.032000 audit: BPF prog-id=31 op=UNLOAD Dec 16 13:09:49.035861 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:09:49.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.037674 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
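systemd-networkd-wait-online finishes once eth0 is configured, has gained carrier, and (as logged just above) has an IPv6 link-local address. A much coarser stand-in for that check is the kernel's per-interface operstate in sysfs, sketched here.

```python
# Sketch: report the kernel's operstate for every network interface via sysfs.
# This is a coarse stand-in for what networkd-wait-online actually waits for.
import os

def interface_states(root="/sys/class/net"):
    for name in sorted(os.listdir(root)):
        try:
            with open(os.path.join(root, name, "operstate")) as f:
                yield name, f.read().strip()
        except OSError:
            yield name, "unknown"

if __name__ == "__main__":
    for name, state in interface_states():
        print(f"{name}: {state}")
```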
Dec 16 13:09:49.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.048016 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:09:49.049443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:09:49.050283 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:09:49.054950 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:09:49.059877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:09:49.060750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:09:49.063869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:09:49.068401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:09:49.070661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:09:49.070848 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:09:49.071902 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 13:09:49.074329 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:09:49.075427 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:09:49.080791 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:09:49.083144 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:09:49.085233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:09:49.085732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:09:49.090247 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 13:09:49.090288 kernel: audit: type=1127 audit(1765890589.085:232): pid=2469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.085000 audit[2469]: SYSTEM_BOOT pid=2469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.090988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:09:49.091163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:09:49.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:49.094770 kernel: audit: type=1130 audit(1765890589.089:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.097748 kernel: audit: type=1131 audit(1765890589.089:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.097159 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:09:49.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.099041 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:09:49.103370 kernel: audit: type=1130 audit(1765890589.093:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.103483 kernel: audit: type=1131 audit(1765890589.093:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.112027 kernel: audit: type=1130 audit(1765890589.104:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.112081 kernel: audit: type=1131 audit(1765890589.104:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.118136 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:09:49.127108 kernel: audit: type=1130 audit(1765890589.120:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:49.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.123534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:09:49.123749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:09:49.124693 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:09:49.133125 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:09:49.136790 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:09:49.139064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:09:49.139218 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:09:49.139309 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:09:49.139395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:09:49.140497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:09:49.140768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:09:49.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.142791 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:09:49.143506 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:09:49.149474 kernel: audit: type=1130 audit(1765890589.141:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.149519 kernel: audit: type=1131 audit(1765890589.141:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.153142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
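Once kauditd starts rate-limiting ("132 callbacks suppressed"), the kernel prints audit records with numeric types (type=1127, 1130, 1131) alongside the symbolic messages. A tiny lookup table for the pairings visible in this log: 1127 with SYSTEM_BOOT, 1130 with SERVICE_START, 1131 with SERVICE_STOP.

```python
# Sketch: numeric audit record types seen in this log, paired with the symbolic
# names printed alongside them above.
AUDIT_TYPES = {
    1127: "SYSTEM_BOOT",
    1130: "SERVICE_START",
    1131: "SERVICE_STOP",
}

def describe(type_number):
    return AUDIT_TYPES.get(type_number, f"unknown ({type_number})")

if __name__ == "__main__":
    for t in (1127, 1130, 1131):
        print(t, "->", describe(t))
```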
Dec 16 13:09:49.153329 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:09:49.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.159632 systemd[1]: Finished ensure-sysext.service. Dec 16 13:09:49.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.163258 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:09:49.163411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:09:49.164074 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:09:49.168746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:09:49.176773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:09:49.180181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:09:49.180263 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:09:49.180292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:09:49.180320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:09:49.180356 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:09:49.182403 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:09:49.182825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:09:49.183765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:09:49.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.186244 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:09:49.186778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:09:49.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 13:09:49.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.196362 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:09:49.196537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:09:49.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.199031 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:09:49.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.202183 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:09:49.796000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 13:09:49.796000 audit[2507]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdd45b950 a2=420 a3=0 items=0 ppid=2462 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:49.796000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:09:49.798380 augenrules[2507]: No rules Dec 16 13:09:49.799270 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:09:49.799459 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:09:50.270441 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:09:50.272236 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:09:57.648155 ldconfig[2467]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:09:57.657458 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:09:57.660118 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:09:57.675836 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:09:57.677347 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:09:57.679820 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:09:57.681425 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:09:57.684743 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
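The audit-rules entry above records PROCTITLE as a hex string in which the arguments are separated by NUL bytes. A sketch decoding that encoding back into an argv list, using the exact string from the log as the example.

```python
# Sketch: decode an audit PROCTITLE hex string (NUL-separated argv) into a list.
def decode_proctitle(hex_string):
    raw = bytes.fromhex(hex_string)
    return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    encoded = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(decode_proctitle(encoded))  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```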
Dec 16 13:09:57.686215 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:09:57.688794 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:09:57.690362 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 13:09:57.692783 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 13:09:57.695744 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:09:57.697173 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:09:57.697199 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:09:57.698217 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:09:57.713225 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:09:57.715434 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:09:57.718380 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:09:57.721890 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:09:57.723391 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:09:57.726966 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:09:57.730954 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:09:57.732739 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:09:57.735392 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:09:57.736626 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:09:57.739788 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:09:57.739811 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:09:57.741463 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 13:09:57.743159 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:09:57.748311 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:09:57.757977 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:09:57.760283 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:09:57.764394 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:09:57.769049 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:09:57.772353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:09:57.774046 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:09:57.776761 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Dec 16 13:09:57.778452 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
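At this point systemd is listening on dbus.socket, docker.socket, and the SSH sockets before the matching services start, i.e. socket activation. A sketch that simply verifies a path exists and is a UNIX socket; /run/dbus/system_bus_socket is the conventional D-Bus system bus location and is an assumption here, not something the log states.

```python
# Sketch: check that a path exists and is a UNIX socket, e.g. the D-Bus system bus.
# The path below is the conventional location, assumed rather than taken from the log.
import os
import stat

def is_unix_socket(path):
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    path = "/run/dbus/system_bus_socket"
    print(path, "is a socket" if is_unix_socket(path) else "is missing or not a socket")
```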
Dec 16 13:09:57.780824 jq[2527]: false Dec 16 13:09:57.780402 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 13:09:57.781383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:57.791184 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:09:57.792507 KVP[2530]: KVP starting; pid is:2530 Dec 16 13:09:57.794861 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:09:57.798488 KVP[2530]: KVP LIC Version: 3.1 Dec 16 13:09:57.798731 kernel: hv_utils: KVP IC version 4.0 Dec 16 13:09:57.798761 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:09:57.805995 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:09:57.822888 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:09:57.827154 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:09:57.829454 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:09:57.829856 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:09:57.831863 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:09:57.840501 google_oslogin_nss_cache[2529]: oslogin_cache_refresh[2529]: Refreshing passwd entry cache Dec 16 13:09:57.838153 oslogin_cache_refresh[2529]: Refreshing passwd entry cache Dec 16 13:09:57.842345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:09:57.847258 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:09:57.847460 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:09:57.853215 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:09:57.853747 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:09:57.859563 jq[2542]: true Dec 16 13:09:57.869085 jq[2556]: true Dec 16 13:09:57.874954 google_oslogin_nss_cache[2529]: oslogin_cache_refresh[2529]: Failure getting users, quitting Dec 16 13:09:57.875008 oslogin_cache_refresh[2529]: Failure getting users, quitting Dec 16 13:09:57.875063 google_oslogin_nss_cache[2529]: oslogin_cache_refresh[2529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:09:57.875084 oslogin_cache_refresh[2529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:09:57.875142 google_oslogin_nss_cache[2529]: oslogin_cache_refresh[2529]: Refreshing group entry cache Dec 16 13:09:57.875168 oslogin_cache_refresh[2529]: Refreshing group entry cache Dec 16 13:09:57.894754 google_oslogin_nss_cache[2529]: oslogin_cache_refresh[2529]: Failure getting groups, quitting Dec 16 13:09:57.894754 google_oslogin_nss_cache[2529]: oslogin_cache_refresh[2529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:09:57.892937 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:09:57.891773 oslogin_cache_refresh[2529]: Failure getting groups, quitting Dec 16 13:09:57.893171 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
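google-oslogin-cache refreshes its passwd and group caches by walking the NSS databases; on this host the OS Login backend returns nothing, so it writes empty caches (as logged above). For illustration, a sketch that enumerates the locally visible NSS passwd and group entries with Python's pwd and grp modules; this reflects whatever NSS sources are configured, not OS Login specifically.

```python
# Sketch: enumerate NSS passwd and group entries, the same databases the
# oslogin cache refresher walks. Reflects local NSS configuration only.
import grp
import pwd

if __name__ == "__main__":
    users = [entry.pw_name for entry in pwd.getpwall()]
    groups = [entry.gr_name for entry in grp.getgrall()]
    print(f"{len(users)} passwd entries, {len(groups)} group entries")
```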
Dec 16 13:09:57.891780 oslogin_cache_refresh[2529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:09:57.899617 chronyd[2519]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 13:09:57.904192 chronyd[2519]: Timezone right/UTC failed leap second check, ignoring Dec 16 13:09:57.904481 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 13:09:57.904312 chronyd[2519]: Loaded seccomp filter (level 2) Dec 16 13:09:57.925166 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:09:57.926602 extend-filesystems[2528]: Found /dev/nvme0n1p6 Dec 16 13:09:57.934713 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:09:57.934913 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:09:57.962572 extend-filesystems[2528]: Found /dev/nvme0n1p9 Dec 16 13:09:57.968005 extend-filesystems[2528]: Checking size of /dev/nvme0n1p9 Dec 16 13:09:57.972373 tar[2553]: linux-amd64/LICENSE Dec 16 13:09:57.974833 tar[2553]: linux-amd64/helm Dec 16 13:09:57.984399 systemd-logind[2540]: New seat seat0. Dec 16 13:09:57.987039 systemd-logind[2540]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 13:09:57.988130 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:09:57.991431 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:09:58.026510 bash[2581]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:09:58.026627 update_engine[2541]: I20251216 13:09:58.026544 2541 main.cc:92] Flatcar Update Engine starting Dec 16 13:09:58.027240 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:09:58.030406 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 13:09:58.033417 extend-filesystems[2528]: Resized partition /dev/nvme0n1p9 Dec 16 13:09:58.051565 sshd_keygen[2566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:09:58.077974 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:09:58.083828 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:09:58.087779 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 13:09:58.108798 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:09:58.113845 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:09:58.116827 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:09:58.124846 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 13:09:58.130591 extend-filesystems[2613]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:09:58.133673 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:09:58.133529 dbus-daemon[2522]: [system] SELinux support is enabled Dec 16 13:09:58.138160 update_engine[2541]: I20251216 13:09:58.138046 2541 update_check_scheduler.cc:74] Next update check in 2m1s Dec 16 13:09:58.148738 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 13:09:58.174464 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Dec 16 13:09:58.140218 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
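The kernel reports the root filesystem being resized online from 6359552 to 6376955 blocks; with the 4 KiB block size reported by resize2fs just below, that is roughly a 68 MiB growth to about 24.3 GiB in total. A quick arithmetic check using the figures from the log.

```python
# Sketch: convert the resize figures from the log using the 4 KiB block size
# reported by resize2fs below.
BLOCK_SIZE = 4096
OLD_BLOCKS = 6359552
NEW_BLOCKS = 6376955

if __name__ == "__main__":
    grown = (NEW_BLOCKS - OLD_BLOCKS) * BLOCK_SIZE
    total = NEW_BLOCKS * BLOCK_SIZE
    print(f"grew by {grown / 2**20:.1f} MiB to {total / 2**30:.2f} GiB")
```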
Dec 16 13:09:58.148488 dbus-daemon[2522]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:09:58.140249 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:09:58.142392 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:09:58.142410 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:09:58.145323 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:09:58.150871 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:09:58.160219 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:09:58.164007 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:09:58.169872 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:09:58.172090 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:09:58.187831 extend-filesystems[2613]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 13:09:58.187831 extend-filesystems[2613]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 13:09:58.187831 extend-filesystems[2613]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Dec 16 13:09:58.205388 extend-filesystems[2528]: Resized filesystem in /dev/nvme0n1p9 Dec 16 13:09:58.191991 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:09:58.194045 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:09:58.264842 coreos-metadata[2521]: Dec 16 13:09:58.263 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:09:58.267965 coreos-metadata[2521]: Dec 16 13:09:58.266 INFO Fetch successful Dec 16 13:09:58.267965 coreos-metadata[2521]: Dec 16 13:09:58.267 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 13:09:58.272524 coreos-metadata[2521]: Dec 16 13:09:58.272 INFO Fetch successful Dec 16 13:09:58.274208 coreos-metadata[2521]: Dec 16 13:09:58.273 INFO Fetching http://168.63.129.16/machine/1cb29f0f-1123-480d-a460-d50a7d380ec8/aae485be%2Dfda1%2D4103%2Da8fe%2Da3553ccac307.%5Fci%2D4547.0.0%2Da%2De647365c22?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 13:09:58.277242 coreos-metadata[2521]: Dec 16 13:09:58.276 INFO Fetch successful Dec 16 13:09:58.277242 coreos-metadata[2521]: Dec 16 13:09:58.277 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:09:58.287767 coreos-metadata[2521]: Dec 16 13:09:58.287 INFO Fetch successful Dec 16 13:09:58.333666 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:09:58.335894 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:09:58.556378 tar[2553]: linux-amd64/README.md Dec 16 13:09:58.574663 locksmithd[2637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:09:58.578426 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:09:58.937541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
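The coreos-metadata fetches logged above touch the two well-known Azure endpoints: 168.63.129.16 is the platform WireServer and 169.254.169.254 is the Instance Metadata Service (IMDS). A minimal sketch of the vmSize query from the log follows, assuming it runs on the Azure VM itself; the URL is copied verbatim from the log line, while the "Metadata: true" header is the usual IMDS requirement and is assumed here rather than taken from the log.

# Sketch: reproduce the IMDS vmSize query seen in the coreos-metadata log.
# Only works from inside an Azure VM; the "Metadata: true" header is assumed.
import urllib.request

IMDS_VMSIZE = ("http://169.254.169.254/metadata/instance/compute/vmSize"
               "?api-version=2017-08-01&format=text")

def fetch_vmsize(timeout: float = 2.0) -> str:
    req = urllib.request.Request(IMDS_VMSIZE, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        # Returns the plain-text SKU name, e.g. "Standard_D2s_v3" (illustrative).
        return resp.read().decode()

if __name__ == "__main__":
    print(fetch_vmsize())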
Dec 16 13:09:58.946894 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:09:59.337231 kubelet[2676]: E1216 13:09:59.337203 2676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:09:59.338836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:09:59.338917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:09:59.339176 systemd[1]: kubelet.service: Consumed 772ms CPU time, 264.7M memory peak. Dec 16 13:09:59.901968 containerd[2582]: time="2025-12-16T13:09:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:09:59.902529 containerd[2582]: time="2025-12-16T13:09:59.902499487Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 13:09:59.909462 containerd[2582]: time="2025-12-16T13:09:59.909432643Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.508µs" Dec 16 13:09:59.909462 containerd[2582]: time="2025-12-16T13:09:59.909455632Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:09:59.909544 containerd[2582]: time="2025-12-16T13:09:59.909487659Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:09:59.909544 containerd[2582]: time="2025-12-16T13:09:59.909498438Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:09:59.909633 containerd[2582]: time="2025-12-16T13:09:59.909618940Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:09:59.909652 containerd[2582]: time="2025-12-16T13:09:59.909631732Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:09:59.909690 containerd[2582]: time="2025-12-16T13:09:59.909672983Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:09:59.909690 containerd[2582]: time="2025-12-16T13:09:59.909684170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:09:59.909864 containerd[2582]: time="2025-12-16T13:09:59.909847442Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:09:59.909864 containerd[2582]: time="2025-12-16T13:09:59.909858181Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:09:59.909900 containerd[2582]: time="2025-12-16T13:09:59.909867056Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper 
type=io.containerd.snapshotter.v1 Dec 16 13:09:59.909900 containerd[2582]: time="2025-12-16T13:09:59.909873698Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:09:59.910003 containerd[2582]: time="2025-12-16T13:09:59.909989139Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:09:59.910003 containerd[2582]: time="2025-12-16T13:09:59.909998943Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:09:59.910085 containerd[2582]: time="2025-12-16T13:09:59.910069713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:09:59.910193 containerd[2582]: time="2025-12-16T13:09:59.910177147Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:09:59.910220 containerd[2582]: time="2025-12-16T13:09:59.910197692Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:09:59.910220 containerd[2582]: time="2025-12-16T13:09:59.910206116Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:09:59.910252 containerd[2582]: time="2025-12-16T13:09:59.910240919Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:09:59.910473 containerd[2582]: time="2025-12-16T13:09:59.910450000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:09:59.910542 containerd[2582]: time="2025-12-16T13:09:59.910526964Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924283047Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924328442Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924661756Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924677005Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924687823Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924697295Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924720045Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924727934Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:09:59.925000 
containerd[2582]: time="2025-12-16T13:09:59.924737469Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924748802Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924758742Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924768766Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924778307Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:09:59.925000 containerd[2582]: time="2025-12-16T13:09:59.924789419Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924888119Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924905382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924917906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924927650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924938828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924948086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924960267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924975876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924989747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.924999079Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.925007510Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.925027129Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.925069573Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:09:59.925267 containerd[2582]: time="2025-12-16T13:09:59.925079936Z" level=info msg="Start snapshots syncer" Dec 16 13:09:59.925267 containerd[2582]: 
time="2025-12-16T13:09:59.925091215Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:09:59.925519 containerd[2582]: time="2025-12-16T13:09:59.925310027Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:09:59.925519 containerd[2582]: time="2025-12-16T13:09:59.925345470Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925379727Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925446351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925461163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925470525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925479065Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925488683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925497221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925506674Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925515524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925524076Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925540475Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925550253Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:09:59.925637 containerd[2582]: time="2025-12-16T13:09:59.925557772Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:09:59.925864 containerd[2582]: time="2025-12-16T13:09:59.925566019Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:09:59.925864 containerd[2582]: time="2025-12-16T13:09:59.925571903Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:09:59.925864 containerd[2582]: time="2025-12-16T13:09:59.925580687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:09:59.925864 containerd[2582]: time="2025-12-16T13:09:59.925590353Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:09:59.925864 containerd[2582]: time="2025-12-16T13:09:59.925605854Z" level=info msg="runtime interface created" Dec 16 13:09:59.925864 containerd[2582]: time="2025-12-16T13:09:59.925610147Z" level=info msg="created NRI interface" Dec 16 13:09:59.925864 containerd[2582]: time="2025-12-16T13:09:59.925621219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:09:59.925864 containerd[2582]: time="2025-12-16T13:09:59.925630549Z" level=info msg="Connect containerd service" Dec 16 13:09:59.925864 containerd[2582]: time="2025-12-16T13:09:59.925647370Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:09:59.926322 containerd[2582]: time="2025-12-16T13:09:59.926299235Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:10:00.681751 waagent[2635]: 2025-12-16T13:10:00.681661Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 13:10:00.683254 waagent[2635]: 2025-12-16T13:10:00.683206Z INFO Daemon Daemon OS: flatcar 4547.0.0 Dec 16 13:10:00.685783 waagent[2635]: 2025-12-16T13:10:00.685748Z INFO Daemon Daemon Python: 3.11.13 Dec 16 13:10:00.687909 waagent[2635]: 2025-12-16T13:10:00.687870Z INFO Daemon Daemon Run daemon Dec 16 13:10:00.690830 waagent[2635]: 2025-12-16T13:10:00.690801Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.0.0' Dec 16 13:10:00.696372 waagent[2635]: 2025-12-16T13:10:00.694794Z INFO Daemon Daemon Using waagent for provisioning Dec 16 
13:10:00.696831 waagent[2635]: 2025-12-16T13:10:00.696793Z INFO Daemon Daemon Activate resource disk Dec 16 13:10:00.698660 waagent[2635]: 2025-12-16T13:10:00.698625Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 13:10:00.702919 waagent[2635]: 2025-12-16T13:10:00.702888Z INFO Daemon Daemon Found device: None Dec 16 13:10:00.703812 waagent[2635]: 2025-12-16T13:10:00.703790Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 13:10:00.705529 waagent[2635]: 2025-12-16T13:10:00.705511Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 13:10:00.710161 waagent[2635]: 2025-12-16T13:10:00.710119Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:10:00.711541 waagent[2635]: 2025-12-16T13:10:00.711509Z INFO Daemon Daemon Running default provisioning handler Dec 16 13:10:00.719773 waagent[2635]: 2025-12-16T13:10:00.719648Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 13:10:00.724175 waagent[2635]: 2025-12-16T13:10:00.724138Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 13:10:00.727782 waagent[2635]: 2025-12-16T13:10:00.727746Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 13:10:00.730780 waagent[2635]: 2025-12-16T13:10:00.730750Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 13:10:00.800914 containerd[2582]: time="2025-12-16T13:10:00.800887533Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:10:00.800985 containerd[2582]: time="2025-12-16T13:10:00.800938332Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:10:00.800985 containerd[2582]: time="2025-12-16T13:10:00.800961587Z" level=info msg="Start subscribing containerd event" Dec 16 13:10:00.801026 containerd[2582]: time="2025-12-16T13:10:00.800985902Z" level=info msg="Start recovering state" Dec 16 13:10:00.801095 containerd[2582]: time="2025-12-16T13:10:00.801081077Z" level=info msg="Start event monitor" Dec 16 13:10:00.801115 containerd[2582]: time="2025-12-16T13:10:00.801094033Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:10:00.801115 containerd[2582]: time="2025-12-16T13:10:00.801102221Z" level=info msg="Start streaming server" Dec 16 13:10:00.801115 containerd[2582]: time="2025-12-16T13:10:00.801109962Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:10:00.801177 containerd[2582]: time="2025-12-16T13:10:00.801116380Z" level=info msg="runtime interface starting up..." Dec 16 13:10:00.801177 containerd[2582]: time="2025-12-16T13:10:00.801122286Z" level=info msg="starting plugins..." Dec 16 13:10:00.801177 containerd[2582]: time="2025-12-16T13:10:00.801133257Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:10:00.801240 containerd[2582]: time="2025-12-16T13:10:00.801225689Z" level=info msg="containerd successfully booted in 0.899688s" Dec 16 13:10:00.801887 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:10:00.803943 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:10:00.806997 systemd[1]: Startup finished in 6.150s (kernel) + 15.260s (initrd) + 21.910s (userspace) = 43.321s. 
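A small arithmetic note on the "Startup finished" line above: the three per-phase figures sum to 43.320 s while the total is printed as 43.321 s. systemd computes the total from the raw timestamps and rounds each figure independently for display, so a one-millisecond mismatch is expected rounding rather than a bookkeeping error. A trivial check:

# The rounded phase durations from the log sum to 43.320 s; the reported
# 43.321 s total comes from unrounded timestamps, hence the 1 ms gap.
kernel_s, initrd_s, userspace_s = 6.150, 15.260, 21.910
print(f"{kernel_s + initrd_s + userspace_s:.3f}")  # 43.320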
Dec 16 13:10:00.864943 waagent[2635]: 2025-12-16T13:10:00.864834Z INFO Daemon Daemon Successfully mounted dvd Dec 16 13:10:00.885994 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 13:10:00.886942 waagent[2635]: 2025-12-16T13:10:00.886906Z INFO Daemon Daemon Detect protocol endpoint Dec 16 13:10:00.888040 waagent[2635]: 2025-12-16T13:10:00.887970Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:10:00.888660 waagent[2635]: 2025-12-16T13:10:00.888637Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 13:10:00.890417 waagent[2635]: 2025-12-16T13:10:00.890393Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 13:10:00.891575 waagent[2635]: 2025-12-16T13:10:00.891551Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 13:10:00.892870 waagent[2635]: 2025-12-16T13:10:00.892659Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 13:10:00.904320 waagent[2635]: 2025-12-16T13:10:00.904288Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 13:10:00.904808 waagent[2635]: 2025-12-16T13:10:00.904533Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 13:10:00.904808 waagent[2635]: 2025-12-16T13:10:00.904657Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 13:10:01.093761 waagent[2635]: 2025-12-16T13:10:01.093696Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 13:10:01.095016 waagent[2635]: 2025-12-16T13:10:01.094954Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 13:10:01.100033 waagent[2635]: 2025-12-16T13:10:01.100002Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:10:01.116779 waagent[2635]: 2025-12-16T13:10:01.116750Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 13:10:01.118048 waagent[2635]: 2025-12-16T13:10:01.118018Z INFO Daemon Dec 16 13:10:01.118716 waagent[2635]: 2025-12-16T13:10:01.118656Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 25699cd1-1b0f-4bc1-b461-ea3ba7d2ad10 eTag: 14674670774205934765 source: Fabric] Dec 16 13:10:01.120942 waagent[2635]: 2025-12-16T13:10:01.120914Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 13:10:01.122404 waagent[2635]: 2025-12-16T13:10:01.122377Z INFO Daemon Dec 16 13:10:01.123085 waagent[2635]: 2025-12-16T13:10:01.123029Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:10:01.127978 waagent[2635]: 2025-12-16T13:10:01.127954Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 13:10:01.250117 login[2643]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:01.250117 login[2641]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:01.255596 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:10:01.259907 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:10:01.266269 systemd-logind[2540]: New session 2 of user core. Dec 16 13:10:01.274199 systemd-logind[2540]: New session 1 of user core. Dec 16 13:10:01.281435 waagent[2635]: 2025-12-16T13:10:01.279490Z INFO Daemon Downloaded certificate {'thumbprint': '49D7A131674914D5589C2D21F4CCD8240BE0AE16', 'hasPrivateKey': True} Dec 16 13:10:01.282810 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Dec 16 13:10:01.282939 waagent[2635]: 2025-12-16T13:10:01.282898Z INFO Daemon Fetch goal state completed Dec 16 13:10:01.285906 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:10:01.296107 (systemd)[2720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:01.297697 systemd-logind[2540]: New session 3 of user core. Dec 16 13:10:01.326379 waagent[2635]: 2025-12-16T13:10:01.326360Z INFO Daemon Daemon Starting provisioning Dec 16 13:10:01.327486 waagent[2635]: 2025-12-16T13:10:01.327432Z INFO Daemon Daemon Handle ovf-env.xml. Dec 16 13:10:01.328431 waagent[2635]: 2025-12-16T13:10:01.327691Z INFO Daemon Daemon Set hostname [ci-4547.0.0-a-e647365c22] Dec 16 13:10:01.429315 waagent[2635]: 2025-12-16T13:10:01.429272Z INFO Daemon Daemon Publish hostname [ci-4547.0.0-a-e647365c22] Dec 16 13:10:01.433713 waagent[2635]: 2025-12-16T13:10:01.431982Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 13:10:01.433988 waagent[2635]: 2025-12-16T13:10:01.433962Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 13:10:01.441693 systemd-networkd[2197]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:10:01.441739 systemd-networkd[2197]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:10:01.441790 systemd-networkd[2197]: eth0: DHCP lease lost Dec 16 13:10:01.446061 systemd[2720]: Queued start job for default target default.target. Dec 16 13:10:01.452402 systemd[2720]: Created slice app.slice - User Application Slice. Dec 16 13:10:01.452436 systemd[2720]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 13:10:01.452449 systemd[2720]: Reached target paths.target - Paths. Dec 16 13:10:01.452482 systemd[2720]: Reached target timers.target - Timers. Dec 16 13:10:01.455878 waagent[2635]: 2025-12-16T13:10:01.453904Z INFO Daemon Daemon Create user account if not exists Dec 16 13:10:01.455793 systemd[2720]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:10:01.456747 waagent[2635]: 2025-12-16T13:10:01.456222Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 13:10:01.459282 waagent[2635]: 2025-12-16T13:10:01.459237Z INFO Daemon Daemon Configure sudoer Dec 16 13:10:01.461409 systemd[2720]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 13:10:01.465276 waagent[2635]: 2025-12-16T13:10:01.465030Z INFO Daemon Daemon Configure sshd Dec 16 13:10:01.466848 systemd[2720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:10:01.466971 systemd[2720]: Reached target sockets.target - Sockets. Dec 16 13:10:01.467696 systemd-networkd[2197]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 16 13:10:01.469178 systemd[2720]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 13:10:01.469259 systemd[2720]: Reached target basic.target - Basic System. Dec 16 13:10:01.469305 systemd[2720]: Reached target default.target - Main User Target. Dec 16 13:10:01.469323 systemd[2720]: Startup finished in 168ms. Dec 16 13:10:01.469464 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:10:01.470538 waagent[2635]: 2025-12-16T13:10:01.470498Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. 
Dec 16 13:10:01.473488 waagent[2635]: 2025-12-16T13:10:01.470635Z INFO Daemon Daemon Deploy ssh public key. Dec 16 13:10:01.475904 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:10:01.476627 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:10:02.651007 waagent[2635]: 2025-12-16T13:10:02.650962Z INFO Daemon Daemon Provisioning complete Dec 16 13:10:02.660670 waagent[2635]: 2025-12-16T13:10:02.660639Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 13:10:02.661929 waagent[2635]: 2025-12-16T13:10:02.661860Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 16 13:10:02.663686 waagent[2635]: 2025-12-16T13:10:02.663660Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 13:10:02.751922 waagent[2758]: 2025-12-16T13:10:02.751865Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 13:10:02.752163 waagent[2758]: 2025-12-16T13:10:02.751946Z INFO ExtHandler ExtHandler OS: flatcar 4547.0.0 Dec 16 13:10:02.752163 waagent[2758]: 2025-12-16T13:10:02.751981Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 13:10:02.752163 waagent[2758]: 2025-12-16T13:10:02.752015Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 16 13:10:02.848157 waagent[2758]: 2025-12-16T13:10:02.848109Z INFO ExtHandler ExtHandler Distro: flatcar-4547.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 13:10:02.848285 waagent[2758]: 2025-12-16T13:10:02.848261Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:10:02.848345 waagent[2758]: 2025-12-16T13:10:02.848312Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:10:02.855979 waagent[2758]: 2025-12-16T13:10:02.855935Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:10:02.860945 waagent[2758]: 2025-12-16T13:10:02.860917Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 13:10:02.861228 waagent[2758]: 2025-12-16T13:10:02.861202Z INFO ExtHandler Dec 16 13:10:02.861264 waagent[2758]: 2025-12-16T13:10:02.861247Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: da1ec5d7-64c6-4da9-a1d1-0c7399b34042 eTag: 14674670774205934765 source: Fabric] Dec 16 13:10:02.861432 waagent[2758]: 2025-12-16T13:10:02.861411Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:10:02.861718 waagent[2758]: 2025-12-16T13:10:02.861680Z INFO ExtHandler Dec 16 13:10:02.861749 waagent[2758]: 2025-12-16T13:10:02.861730Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:10:02.864619 waagent[2758]: 2025-12-16T13:10:02.864591Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:10:02.927350 waagent[2758]: 2025-12-16T13:10:02.927283Z INFO ExtHandler Downloaded certificate {'thumbprint': '49D7A131674914D5589C2D21F4CCD8240BE0AE16', 'hasPrivateKey': True} Dec 16 13:10:02.927619 waagent[2758]: 2025-12-16T13:10:02.927593Z INFO ExtHandler Fetch goal state completed Dec 16 13:10:02.937858 waagent[2758]: 2025-12-16T13:10:02.937819Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Dec 16 13:10:02.941398 waagent[2758]: 2025-12-16T13:10:02.941349Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2758 Dec 16 13:10:02.941495 waagent[2758]: 2025-12-16T13:10:02.941457Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 13:10:02.941730 waagent[2758]: 2025-12-16T13:10:02.941678Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 13:10:02.942564 waagent[2758]: 2025-12-16T13:10:02.942533Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 13:10:02.942860 waagent[2758]: 2025-12-16T13:10:02.942837Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 13:10:02.942945 waagent[2758]: 2025-12-16T13:10:02.942927Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 13:10:02.943285 waagent[2758]: 2025-12-16T13:10:02.943261Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 13:10:03.070683 waagent[2758]: 2025-12-16T13:10:03.070660Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 13:10:03.070840 waagent[2758]: 2025-12-16T13:10:03.070795Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 13:10:03.075724 waagent[2758]: 2025-12-16T13:10:03.075400Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 13:10:03.080206 systemd[1]: Reload requested from client PID 2773 ('systemctl') (unit waagent.service)... Dec 16 13:10:03.080220 systemd[1]: Reloading... Dec 16 13:10:03.154739 zram_generator::config[2819]: No configuration found. Dec 16 13:10:03.315496 systemd[1]: Reloading finished in 235 ms. Dec 16 13:10:03.334456 waagent[2758]: 2025-12-16T13:10:03.333789Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 13:10:03.334456 waagent[2758]: 2025-12-16T13:10:03.333874Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 13:10:03.692459 waagent[2758]: 2025-12-16T13:10:03.692389Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 13:10:03.692635 waagent[2758]: 2025-12-16T13:10:03.692612Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 13:10:03.693350 waagent[2758]: 2025-12-16T13:10:03.693205Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 13:10:03.693350 waagent[2758]: 2025-12-16T13:10:03.693256Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:10:03.693474 waagent[2758]: 2025-12-16T13:10:03.693451Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:10:03.693628 waagent[2758]: 2025-12-16T13:10:03.693609Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 13:10:03.693818 waagent[2758]: 2025-12-16T13:10:03.693796Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 13:10:03.694000 waagent[2758]: 2025-12-16T13:10:03.693974Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 13:10:03.694102 waagent[2758]: 2025-12-16T13:10:03.694076Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:10:03.694160 waagent[2758]: 2025-12-16T13:10:03.694132Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:10:03.694255 waagent[2758]: 2025-12-16T13:10:03.694237Z INFO EnvHandler ExtHandler Configure routes Dec 16 13:10:03.694287 waagent[2758]: 2025-12-16T13:10:03.694277Z INFO EnvHandler ExtHandler Gateway:None Dec 16 13:10:03.694319 waagent[2758]: 2025-12-16T13:10:03.694305Z INFO EnvHandler ExtHandler Routes:None Dec 16 13:10:03.694424 waagent[2758]: 2025-12-16T13:10:03.694407Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 13:10:03.694622 waagent[2758]: 2025-12-16T13:10:03.694603Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 13:10:03.694622 waagent[2758]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 13:10:03.694622 waagent[2758]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 13:10:03.694622 waagent[2758]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 13:10:03.694622 waagent[2758]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:10:03.694622 waagent[2758]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:10:03.694622 waagent[2758]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:10:03.694956 waagent[2758]: 2025-12-16T13:10:03.694927Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 13:10:03.695020 waagent[2758]: 2025-12-16T13:10:03.695003Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 13:10:03.695474 waagent[2758]: 2025-12-16T13:10:03.695366Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 13:10:03.705096 waagent[2758]: 2025-12-16T13:10:03.705070Z INFO ExtHandler ExtHandler Dec 16 13:10:03.705160 waagent[2758]: 2025-12-16T13:10:03.705120Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7755a220-6e9c-427c-92e3-831b67926457 correlation ac8a4c08-720f-41be-81d1-2a6e87984c8b created: 2025-12-16T13:08:50.489853Z] Dec 16 13:10:03.705354 waagent[2758]: 2025-12-16T13:10:03.705334Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
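The routing table that MonitorHandler dumps above is read straight from /proc/net/route, where the destination, gateway and mask columns are little-endian hexadecimal. Decoding them ties the dump back to earlier lines: 0108C80A is 10.200.8.1 (the DHCP gateway), 10813FA8 is 168.63.129.16 (the WireServer) and FEA9FEA9 is 169.254.169.254 (IMDS). A small helper, included only to illustrate the encoding:

# Decode the little-endian hex addresses from the /proc/net/route dump above.
import socket
import struct

def decode(hex_addr: str) -> str:
    """Turn a /proc/net/route hex field into a dotted-quad IPv4 address."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

for field in ("00000000", "0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
    print(field, "->", decode(field))
# 0108C80A -> 10.200.8.1, 0008C80A -> 10.200.8.0,
# 10813FA8 -> 168.63.129.16, FEA9FEA9 -> 169.254.169.254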
Dec 16 13:10:03.705667 waagent[2758]: 2025-12-16T13:10:03.705646Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 13:10:03.796720 waagent[2758]: 2025-12-16T13:10:03.796542Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 13:10:03.796720 waagent[2758]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 13:10:03.797160 waagent[2758]: 2025-12-16T13:10:03.797129Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0530C3F6-57AE-4A79-BF15-6F90A2DBED02;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 13:10:03.873027 waagent[2758]: 2025-12-16T13:10:03.872984Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 13:10:03.873027 waagent[2758]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:10:03.873027 waagent[2758]: pkts bytes target prot opt in out source destination Dec 16 13:10:03.873027 waagent[2758]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:10:03.873027 waagent[2758]: pkts bytes target prot opt in out source destination Dec 16 13:10:03.873027 waagent[2758]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:10:03.873027 waagent[2758]: pkts bytes target prot opt in out source destination Dec 16 13:10:03.873027 waagent[2758]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:10:03.873027 waagent[2758]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:10:03.873027 waagent[2758]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:10:03.875330 waagent[2758]: 2025-12-16T13:10:03.875289Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 13:10:03.875330 waagent[2758]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:10:03.875330 waagent[2758]: pkts bytes target prot opt in out source destination Dec 16 13:10:03.875330 waagent[2758]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:10:03.875330 waagent[2758]: pkts bytes target prot opt in out source destination Dec 16 13:10:03.875330 waagent[2758]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:10:03.875330 waagent[2758]: pkts bytes target prot opt in out source destination Dec 16 13:10:03.875330 waagent[2758]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:10:03.875330 waagent[2758]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:10:03.875330 waagent[2758]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:10:03.880465 waagent[2758]: 2025-12-16T13:10:03.880426Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 13:10:03.880465 waagent[2758]: Executing ['ip', '-a', '-o', 'link']: Dec 16 13:10:03.880465 waagent[2758]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 13:10:03.880465 waagent[2758]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:f8:d1 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7c1e5234f8d1 Dec 16 13:10:03.880465 waagent[2758]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:f8:d1 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 
13:10:03.880465 waagent[2758]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 13:10:03.880465 waagent[2758]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 13:10:03.880465 waagent[2758]: 2: eth0 inet 10.200.8.11/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 13:10:03.880465 waagent[2758]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 13:10:03.880465 waagent[2758]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 13:10:03.880465 waagent[2758]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:f8d1/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 13:10:07.119832 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:10:07.120802 systemd[1]: Started sshd@0-10.200.8.11:22-10.200.16.10:33682.service - OpenSSH per-connection server daemon (10.200.16.10:33682). Dec 16 13:10:07.999054 sshd[2907]: Accepted publickey for core from 10.200.16.10 port 33682 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:10:07.999971 sshd-session[2907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:08.004020 systemd-logind[2540]: New session 4 of user core. Dec 16 13:10:08.009874 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:10:08.420549 systemd[1]: Started sshd@1-10.200.8.11:22-10.200.16.10:33686.service - OpenSSH per-connection server daemon (10.200.16.10:33686). Dec 16 13:10:08.956266 sshd[2914]: Accepted publickey for core from 10.200.16.10 port 33686 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:10:08.957237 sshd-session[2914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:08.960912 systemd-logind[2540]: New session 5 of user core. Dec 16 13:10:08.966871 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:10:09.260638 sshd[2918]: Connection closed by 10.200.16.10 port 33686 Dec 16 13:10:09.260998 sshd-session[2914]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:09.263896 systemd[1]: sshd@1-10.200.8.11:22-10.200.16.10:33686.service: Deactivated successfully. Dec 16 13:10:09.265191 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:10:09.265837 systemd-logind[2540]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:10:09.266795 systemd-logind[2540]: Removed session 5. Dec 16 13:10:09.371359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:10:09.372382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:10:09.374949 systemd[1]: Started sshd@2-10.200.8.11:22-10.200.16.10:33688.service - OpenSSH per-connection server daemon (10.200.16.10:33688). Dec 16 13:10:09.913985 sshd[2925]: Accepted publickey for core from 10.200.16.10 port 33688 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:10:09.914942 sshd-session[2925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:09.918759 systemd-logind[2540]: New session 6 of user core. Dec 16 13:10:09.926867 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:10:09.978035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
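On the earlier EnvHandler warning ("Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2"): the nf_tables iptables front end appears to treat the combined list-and-zero invocation as a zero command, for which the numeric/exact/verbose flags are not accepted, hence the "Illegal option `--numeric'" message. Listing and zeroing in two separate calls avoids the problem; the sketch below only illustrates that split under those assumptions and is not what waagent itself runs.

# Sketch of a split list-then-zero sequence for the security table's OUTPUT
# chain; requires root and the iptables binary. Workaround illustration only.
import subprocess

def read_and_zero_output_counters() -> str:
    listing = subprocess.run(
        ["iptables", "-w", "-t", "security", "-L", "OUTPUT", "-nxv"],
        check=True, capture_output=True, text=True,
    ).stdout
    subprocess.run(["iptables", "-w", "-t", "security", "-Z", "OUTPUT"],
                   check=True)
    return listing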
Dec 16 13:10:09.983921 (kubelet)[2937]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:10:10.016277 kubelet[2937]: E1216 13:10:10.016241 2937 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:10:10.018761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:10:10.018884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:10:10.019210 systemd[1]: kubelet.service: Consumed 122ms CPU time, 110.7M memory peak. Dec 16 13:10:10.218151 sshd[2931]: Connection closed by 10.200.16.10 port 33688 Dec 16 13:10:10.218785 sshd-session[2925]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:10.221236 systemd[1]: sshd@2-10.200.8.11:22-10.200.16.10:33688.service: Deactivated successfully. Dec 16 13:10:10.222329 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:10:10.222950 systemd-logind[2540]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:10:10.223857 systemd-logind[2540]: Removed session 6. Dec 16 13:10:10.326608 systemd[1]: Started sshd@3-10.200.8.11:22-10.200.16.10:48184.service - OpenSSH per-connection server daemon (10.200.16.10:48184). Dec 16 13:10:10.859963 sshd[2949]: Accepted publickey for core from 10.200.16.10 port 48184 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:10:10.860878 sshd-session[2949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:10.864625 systemd-logind[2540]: New session 7 of user core. Dec 16 13:10:10.870843 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:10:11.162892 sshd[2953]: Connection closed by 10.200.16.10 port 48184 Dec 16 13:10:11.163198 sshd-session[2949]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:11.165833 systemd[1]: sshd@3-10.200.8.11:22-10.200.16.10:48184.service: Deactivated successfully. Dec 16 13:10:11.166958 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:10:11.167513 systemd-logind[2540]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:10:11.168430 systemd-logind[2540]: Removed session 7. Dec 16 13:10:11.272594 systemd[1]: Started sshd@4-10.200.8.11:22-10.200.16.10:48188.service - OpenSSH per-connection server daemon (10.200.16.10:48188). Dec 16 13:10:11.802754 sshd[2959]: Accepted publickey for core from 10.200.16.10 port 48188 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:10:11.803580 sshd-session[2959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:11.807048 systemd-logind[2540]: New session 8 of user core. Dec 16 13:10:11.812844 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 13:10:12.413201 sudo[2964]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:10:12.413420 sudo[2964]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:10:12.477252 sudo[2964]: pam_unix(sudo:session): session closed for user root Dec 16 13:10:12.577110 sshd[2963]: Connection closed by 10.200.16.10 port 48188 Dec 16 13:10:12.577550 sshd-session[2959]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:12.580327 systemd[1]: sshd@4-10.200.8.11:22-10.200.16.10:48188.service: Deactivated successfully. Dec 16 13:10:12.581551 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:10:12.582125 systemd-logind[2540]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:10:12.583191 systemd-logind[2540]: Removed session 8. Dec 16 13:10:12.690666 systemd[1]: Started sshd@5-10.200.8.11:22-10.200.16.10:48202.service - OpenSSH per-connection server daemon (10.200.16.10:48202). Dec 16 13:10:13.227637 sshd[2971]: Accepted publickey for core from 10.200.16.10 port 48202 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:10:13.228493 sshd-session[2971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:13.232009 systemd-logind[2540]: New session 9 of user core. Dec 16 13:10:13.237835 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:10:13.431538 sudo[2977]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:10:13.431764 sudo[2977]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:10:13.435832 sudo[2977]: pam_unix(sudo:session): session closed for user root Dec 16 13:10:13.439736 sudo[2976]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:10:13.439937 sudo[2976]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:10:13.445025 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:10:13.474016 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 16 13:10:13.474075 kernel: audit: type=1305 audit(1765890613.470:255): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:10:13.474095 kernel: audit: type=1300 audit(1765890613.470:255): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe006d45d0 a2=420 a3=0 items=0 ppid=2982 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:13.470000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:10:13.470000 audit[3001]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe006d45d0 a2=420 a3=0 items=0 ppid=2982 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:13.474361 augenrules[3001]: No rules Dec 16 13:10:13.475939 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:10:13.476204 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 13:10:13.479426 sudo[2976]: pam_unix(sudo:session): session closed for user root Dec 16 13:10:13.470000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:10:13.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.484004 kernel: audit: type=1327 audit(1765890613.470:255): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:10:13.484043 kernel: audit: type=1130 audit(1765890613.474:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.486768 kernel: audit: type=1131 audit(1765890613.474:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.477000 audit[2976]: USER_END pid=2976 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.489897 kernel: audit: type=1106 audit(1765890613.477:258): pid=2976 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.477000 audit[2976]: CRED_DISP pid=2976 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.492600 kernel: audit: type=1104 audit(1765890613.477:259): pid=2976 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.579287 sshd[2975]: Connection closed by 10.200.16.10 port 48202 Dec 16 13:10:13.579606 sshd-session[2971]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:13.578000 audit[2971]: USER_END pid=2971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:13.582384 systemd-logind[2540]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:10:13.582846 systemd[1]: sshd@5-10.200.8.11:22-10.200.16.10:48202.service: Deactivated successfully. Dec 16 13:10:13.585757 systemd[1]: session-9.scope: Deactivated successfully. 
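The audit PROCTITLE records in this stretch encode the audited command line as hexadecimal with NUL bytes separating the arguments. The value logged just above decodes to /sbin/auditctl -R /etc/audit/audit.rules, which matches the audit-rules.service activity around it. A decoding helper, for reference:

# Decode an audit PROCTITLE hex value (NUL-separated argv) into plain text.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))
# -> /sbin/auditctl -R /etc/audit/audit.rules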
Dec 16 13:10:13.578000 audit[2971]: CRED_DISP pid=2971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:13.587673 systemd-logind[2540]: Removed session 9. Dec 16 13:10:13.589910 kernel: audit: type=1106 audit(1765890613.578:260): pid=2971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:13.589943 kernel: audit: type=1104 audit(1765890613.578:261): pid=2971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:13.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.11:22-10.200.16.10:48202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.592953 kernel: audit: type=1131 audit(1765890613.581:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.11:22-10.200.16.10:48202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:13.695511 systemd[1]: Started sshd@6-10.200.8.11:22-10.200.16.10:48208.service - OpenSSH per-connection server daemon (10.200.16.10:48208). Dec 16 13:10:13.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.11:22-10.200.16.10:48208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:14.228000 audit[3010]: USER_ACCT pid=3010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:14.230368 sshd[3010]: Accepted publickey for core from 10.200.16.10 port 48208 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:10:14.229000 audit[3010]: CRED_ACQ pid=3010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:14.229000 audit[3010]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc871f9830 a2=3 a3=0 items=0 ppid=1 pid=3010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:14.229000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:10:14.231264 sshd-session[3010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:14.234727 systemd-logind[2540]: New session 10 of user core. Dec 16 13:10:14.240824 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 13:10:14.240000 audit[3010]: USER_START pid=3010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:14.241000 audit[3014]: CRED_ACQ pid=3014 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:14.433000 audit[3015]: USER_ACCT pid=3015 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:14.435035 sudo[3015]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:10:14.433000 audit[3015]: CRED_REFR pid=3015 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:14.435254 sudo[3015]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:10:14.433000 audit[3015]: USER_START pid=3015 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:17.166802 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:10:17.177010 (dockerd)[3033]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:10:19.026496 dockerd[3033]: time="2025-12-16T13:10:19.026453789Z" level=info msg="Starting up" Dec 16 13:10:19.027228 dockerd[3033]: time="2025-12-16T13:10:19.027207121Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:10:19.037228 dockerd[3033]: time="2025-12-16T13:10:19.037192291Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:10:19.125926 systemd[1]: var-lib-docker-metacopy\x2dcheck3464398592-merged.mount: Deactivated successfully. Dec 16 13:10:19.146510 dockerd[3033]: time="2025-12-16T13:10:19.146471219Z" level=info msg="Loading containers: start." 
Dec 16 13:10:19.204743 kernel: Initializing XFRM netlink socket Dec 16 13:10:19.267000 audit[3079]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.269992 kernel: kauditd_printk_skb: 11 callbacks suppressed Dec 16 13:10:19.270030 kernel: audit: type=1325 audit(1765890619.267:272): table=nat:5 family=2 entries=2 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.267000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd8db24230 a2=0 a3=0 items=0 ppid=3033 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.277064 kernel: audit: type=1300 audit(1765890619.267:272): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd8db24230 a2=0 a3=0 items=0 ppid=3033 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:10:19.270000 audit[3081]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.285344 kernel: audit: type=1327 audit(1765890619.267:272): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:10:19.285397 kernel: audit: type=1325 audit(1765890619.270:273): table=filter:6 family=2 entries=2 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.270000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffde0915f50 a2=0 a3=0 items=0 ppid=3033 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.290656 kernel: audit: type=1300 audit(1765890619.270:273): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffde0915f50 a2=0 a3=0 items=0 ppid=3033 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:10:19.293775 kernel: audit: type=1327 audit(1765890619.270:273): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:10:19.276000 audit[3083]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.296553 kernel: audit: type=1325 audit(1765890619.276:274): table=filter:7 family=2 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.276000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3ed58af0 a2=0 a3=0 items=0 ppid=3033 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.301326 kernel: audit: type=1300 audit(1765890619.276:274): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3ed58af0 a2=0 a3=0 items=0 ppid=3033 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.276000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:10:19.304456 kernel: audit: type=1327 audit(1765890619.276:274): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:10:19.277000 audit[3085]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.307407 kernel: audit: type=1325 audit(1765890619.277:275): table=filter:8 family=2 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.277000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff55b0e150 a2=0 a3=0 items=0 ppid=3033 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.277000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:10:19.280000 audit[3087]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.280000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5ce26170 a2=0 a3=0 items=0 ppid=3033 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.280000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:10:19.281000 audit[3089]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.281000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc15e1d830 a2=0 a3=0 items=0 ppid=3033 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:10:19.281000 audit[3091]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.281000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe1add9f00 a2=0 a3=0 items=0 ppid=3033 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.281000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:10:19.285000 audit[3093]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.285000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe60545fe0 a2=0 a3=0 items=0 ppid=3033 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:10:19.346000 audit[3096]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.346000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdcd777300 a2=0 a3=0 items=0 ppid=3033 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 13:10:19.349000 audit[3098]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.349000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffddd45e9c0 a2=0 a3=0 items=0 ppid=3033 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.349000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:10:19.351000 audit[3100]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.351000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe7ba381c0 a2=0 a3=0 items=0 ppid=3033 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.351000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:10:19.353000 audit[3102]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.353000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc9aa216f0 a2=0 a3=0 items=0 ppid=3033 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.353000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:10:19.354000 audit[3104]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.354000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdb7c443e0 a2=0 a3=0 items=0 ppid=3033 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.354000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:10:19.404000 audit[3134]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.404000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff28e3ccc0 a2=0 a3=0 items=0 ppid=3033 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:10:19.405000 audit[3136]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.405000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc0435ae00 a2=0 a3=0 items=0 ppid=3033 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:10:19.407000 audit[3138]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.407000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6a057070 a2=0 a3=0 items=0 ppid=3033 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.407000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:10:19.408000 audit[3140]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.408000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc16eb0d00 a2=0 a3=0 items=0 ppid=3033 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:10:19.409000 audit[3142]: NETFILTER_CFG table=filter:22 family=10 entries=1 
op=nft_register_chain pid=3142 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.409000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0da2cbe0 a2=0 a3=0 items=0 ppid=3033 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.409000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:10:19.411000 audit[3144]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.411000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe2afb0c20 a2=0 a3=0 items=0 ppid=3033 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.411000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:10:19.412000 audit[3146]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.412000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc26a785a0 a2=0 a3=0 items=0 ppid=3033 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:10:19.414000 audit[3148]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.414000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff43b68fc0 a2=0 a3=0 items=0 ppid=3033 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.414000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:10:19.416000 audit[3150]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.416000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcb2f5fb70 a2=0 a3=0 items=0 ppid=3033 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.416000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 13:10:19.418000 audit[3152]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3152 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.418000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc850453e0 a2=0 a3=0 items=0 ppid=3033 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.418000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:10:19.419000 audit[3154]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.419000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd98220580 a2=0 a3=0 items=0 ppid=3033 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.419000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:10:19.421000 audit[3156]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.421000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdf7536c10 a2=0 a3=0 items=0 ppid=3033 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.421000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:10:19.422000 audit[3158]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.422000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd99713110 a2=0 a3=0 items=0 ppid=3033 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:10:19.427000 audit[3163]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.427000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdaa7bec60 a2=0 a3=0 items=0 ppid=3033 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.427000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:10:19.428000 audit[3165]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.428000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffed18c9ac0 a2=0 a3=0 
items=0 ppid=3033 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.428000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:10:19.430000 audit[3167]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.430000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe1eb83430 a2=0 a3=0 items=0 ppid=3033 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.430000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:10:19.432000 audit[3169]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.432000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff7911b050 a2=0 a3=0 items=0 ppid=3033 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.432000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:10:19.433000 audit[3171]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.433000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcb67e9900 a2=0 a3=0 items=0 ppid=3033 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.433000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:10:19.435000 audit[3173]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:19.435000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffef17266e0 a2=0 a3=0 items=0 ppid=3033 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.435000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:10:19.495000 audit[3178]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.495000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff5f84ee90 a2=0 a3=0 items=0 ppid=3033 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.495000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 13:10:19.497000 audit[3180]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.497000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc2622a3b0 a2=0 a3=0 items=0 ppid=3033 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.497000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 13:10:19.503000 audit[3188]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.503000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe14f1ad90 a2=0 a3=0 items=0 ppid=3033 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.503000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 13:10:19.507000 audit[3193]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.507000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffc7a31b90 a2=0 a3=0 items=0 ppid=3033 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.507000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 13:10:19.509000 audit[3195]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.509000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff82d88260 a2=0 a3=0 items=0 ppid=3033 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.509000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 13:10:19.510000 audit[3197]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.510000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffefe0ca160 a2=0 a3=0 items=0 ppid=3033 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.510000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 13:10:19.512000 audit[3199]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.512000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe15dda830 a2=0 a3=0 items=0 ppid=3033 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.512000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:10:19.514000 audit[3201]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:19.514000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcb5d7d660 a2=0 a3=0 items=0 ppid=3033 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:19.514000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 13:10:19.516630 systemd-networkd[2197]: docker0: Link UP Dec 16 13:10:19.529746 dockerd[3033]: time="2025-12-16T13:10:19.529671790Z" level=info msg="Loading containers: done." Dec 16 13:10:19.540064 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3079287586-merged.mount: Deactivated successfully. Dec 16 13:10:19.601958 dockerd[3033]: time="2025-12-16T13:10:19.601932569Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:10:19.602061 dockerd[3033]: time="2025-12-16T13:10:19.601983328Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:10:19.602061 dockerd[3033]: time="2025-12-16T13:10:19.602038493Z" level=info msg="Initializing buildkit" Dec 16 13:10:19.639753 dockerd[3033]: time="2025-12-16T13:10:19.639718140Z" level=info msg="Completed buildkit initialization" Dec 16 13:10:19.644964 dockerd[3033]: time="2025-12-16T13:10:19.644927183Z" level=info msg="Daemon has completed initialization" Dec 16 13:10:19.645109 dockerd[3033]: time="2025-12-16T13:10:19.645024273Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:10:19.645159 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:10:19.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:20.055153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 16 13:10:20.056379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:10:20.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:20.488561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:10:20.501923 (kubelet)[3249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:10:20.531489 kubelet[3249]: E1216 13:10:20.531441 3249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:10:20.532860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:10:20.532979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:10:20.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:10:20.533304 systemd[1]: kubelet.service: Consumed 112ms CPU time, 109.9M memory peak. Dec 16 13:10:20.643530 containerd[2582]: time="2025-12-16T13:10:20.643484218Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 13:10:21.621649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3562024753.mount: Deactivated successfully. Dec 16 13:10:21.686272 chronyd[2519]: Selected source PHC0 Dec 16 13:10:22.484903 containerd[2582]: time="2025-12-16T13:10:22.484870499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:22.487459 containerd[2582]: time="2025-12-16T13:10:22.487324627Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=27404397" Dec 16 13:10:22.489995 containerd[2582]: time="2025-12-16T13:10:22.489974240Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:22.493553 containerd[2582]: time="2025-12-16T13:10:22.493527392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:22.494474 containerd[2582]: time="2025-12-16T13:10:22.494110144Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.85057573s" Dec 16 13:10:22.494474 containerd[2582]: time="2025-12-16T13:10:22.494136782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 13:10:22.494684 containerd[2582]: 
time="2025-12-16T13:10:22.494663963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 13:10:23.727397 containerd[2582]: time="2025-12-16T13:10:23.727364968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:23.729838 containerd[2582]: time="2025-12-16T13:10:23.729810180Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24983855" Dec 16 13:10:23.733044 containerd[2582]: time="2025-12-16T13:10:23.733012074Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:23.740497 containerd[2582]: time="2025-12-16T13:10:23.739910935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:23.740497 containerd[2582]: time="2025-12-16T13:10:23.740363860Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.245671695s" Dec 16 13:10:23.740497 containerd[2582]: time="2025-12-16T13:10:23.740385217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 13:10:23.740939 containerd[2582]: time="2025-12-16T13:10:23.740914122Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 13:10:24.815525 containerd[2582]: time="2025-12-16T13:10:24.815487157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:24.817900 containerd[2582]: time="2025-12-16T13:10:24.817867534Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 16 13:10:24.824250 containerd[2582]: time="2025-12-16T13:10:24.824222716Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:24.828958 containerd[2582]: time="2025-12-16T13:10:24.828763159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:24.829383 containerd[2582]: time="2025-12-16T13:10:24.829361255Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.088421663s" Dec 16 13:10:24.829417 containerd[2582]: time="2025-12-16T13:10:24.829389739Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns 
image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 13:10:24.829961 containerd[2582]: time="2025-12-16T13:10:24.829935195Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 13:10:25.647579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667935173.mount: Deactivated successfully. Dec 16 13:10:25.983554 containerd[2582]: time="2025-12-16T13:10:25.983468980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:25.986070 containerd[2582]: time="2025-12-16T13:10:25.986039502Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=0" Dec 16 13:10:25.988758 containerd[2582]: time="2025-12-16T13:10:25.988720915Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:25.992216 containerd[2582]: time="2025-12-16T13:10:25.992181046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:25.992556 containerd[2582]: time="2025-12-16T13:10:25.992429523Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.162397758s" Dec 16 13:10:25.992556 containerd[2582]: time="2025-12-16T13:10:25.992455271Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 13:10:25.992864 containerd[2582]: time="2025-12-16T13:10:25.992849470Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 13:10:26.689545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027718441.mount: Deactivated successfully. 
Dec 16 13:10:27.511067 containerd[2582]: time="2025-12-16T13:10:27.511034654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:27.515180 containerd[2582]: time="2025-12-16T13:10:27.515067011Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Dec 16 13:10:27.518096 containerd[2582]: time="2025-12-16T13:10:27.518074616Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:27.521806 containerd[2582]: time="2025-12-16T13:10:27.521767426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:27.522393 containerd[2582]: time="2025-12-16T13:10:27.522298321Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.529357761s" Dec 16 13:10:27.522393 containerd[2582]: time="2025-12-16T13:10:27.522322009Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 13:10:27.523025 containerd[2582]: time="2025-12-16T13:10:27.523006078Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:10:28.063935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233584128.mount: Deactivated successfully. 
Dec 16 13:10:28.082801 containerd[2582]: time="2025-12-16T13:10:28.082774052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:10:28.085231 containerd[2582]: time="2025-12-16T13:10:28.085203955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:10:28.088193 containerd[2582]: time="2025-12-16T13:10:28.088161025Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:10:28.091767 containerd[2582]: time="2025-12-16T13:10:28.091732830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:10:28.092315 containerd[2582]: time="2025-12-16T13:10:28.092059259Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 569.030925ms" Dec 16 13:10:28.092315 containerd[2582]: time="2025-12-16T13:10:28.092081391Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:10:28.092514 containerd[2582]: time="2025-12-16T13:10:28.092500416Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 13:10:28.791119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371508252.mount: Deactivated successfully. 
Dec 16 13:10:30.320415 containerd[2582]: time="2025-12-16T13:10:30.320375373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:30.322971 containerd[2582]: time="2025-12-16T13:10:30.322946362Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Dec 16 13:10:30.326075 containerd[2582]: time="2025-12-16T13:10:30.326036909Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:30.329721 containerd[2582]: time="2025-12-16T13:10:30.329648339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:30.330521 containerd[2582]: time="2025-12-16T13:10:30.330502202Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.23796446s" Dec 16 13:10:30.330573 containerd[2582]: time="2025-12-16T13:10:30.330524828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 13:10:30.555193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:10:30.556509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:10:31.039978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:10:31.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:31.043108 kernel: kauditd_printk_skb: 113 callbacks suppressed Dec 16 13:10:31.043193 kernel: audit: type=1130 audit(1765890631.041:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:31.051887 (kubelet)[3460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:10:31.084317 kubelet[3460]: E1216 13:10:31.084271 3460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:10:31.085620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:10:31.085777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:10:31.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:10:31.086352 systemd[1]: kubelet.service: Consumed 114ms CPU time, 108.8M memory peak. 
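Aside (not part of the captured journal): the containerd "Pulled image ... in <duration>" entries above lend themselves to a quick summary. A minimal Python sketch, assuming the journal text is piped in on stdin; the regex is tailored only to the escaped-quote msg="..." format seen in these containerd lines:

    import re
    import sys

    # Matches entries like: msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" ... in 2.23796446s"
    PULL = re.compile(r'Pulled image \\"([^"\\]+)\\".*? in ([0-9.]+)(ms|s)"')

    def pull_times(text):
        """Yield (image, seconds) for every completed pull found in the log text."""
        for m in PULL.finditer(text):
            image, value, unit = m.group(1), float(m.group(2)), m.group(3)
            yield image, value / 1000.0 if unit == "ms" else value

    if __name__ == "__main__":
        for image, seconds in sorted(pull_times(sys.stdin.read()), key=lambda p: -p[1]):
            print(f"{seconds:9.3f}s  {image}")

Run against the entries captured so far, it would rank the etcd pull (about 2.24 s) slowest and the pause image (about 0.57 s) fastest.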
Dec 16 13:10:31.090744 kernel: audit: type=1131 audit(1765890631.085:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:10:32.085650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:10:32.085819 systemd[1]: kubelet.service: Consumed 114ms CPU time, 108.8M memory peak. Dec 16 13:10:32.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:32.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:32.095075 kernel: audit: type=1130 audit(1765890632.085:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:32.095138 kernel: audit: type=1131 audit(1765890632.085:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:32.091941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:10:32.114443 systemd[1]: Reload requested from client PID 3485 ('systemctl') (unit session-10.scope)... Dec 16 13:10:32.114455 systemd[1]: Reloading... Dec 16 13:10:32.210743 zram_generator::config[3537]: No configuration found. Dec 16 13:10:32.381171 systemd[1]: Reloading finished in 266 ms. Dec 16 13:10:32.406059 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:10:32.406129 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:10:32.406398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:10:32.406449 systemd[1]: kubelet.service: Consumed 62ms CPU time, 69.8M memory peak. Dec 16 13:10:32.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:10:32.410951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:10:32.411776 kernel: audit: type=1130 audit(1765890632.405:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 13:10:32.410000 audit: BPF prog-id=87 op=LOAD Dec 16 13:10:32.410000 audit: BPF prog-id=78 op=UNLOAD Dec 16 13:10:32.414798 kernel: audit: type=1334 audit(1765890632.410:320): prog-id=87 op=LOAD Dec 16 13:10:32.414881 kernel: audit: type=1334 audit(1765890632.410:321): prog-id=78 op=UNLOAD Dec 16 13:10:32.411000 audit: BPF prog-id=88 op=LOAD Dec 16 13:10:32.416231 kernel: audit: type=1334 audit(1765890632.411:322): prog-id=88 op=LOAD Dec 16 13:10:32.416275 kernel: audit: type=1334 audit(1765890632.411:323): prog-id=89 op=LOAD Dec 16 13:10:32.411000 audit: BPF prog-id=89 op=LOAD Dec 16 13:10:32.411000 audit: BPF prog-id=79 op=UNLOAD Dec 16 13:10:32.418744 kernel: audit: type=1334 audit(1765890632.411:324): prog-id=79 op=UNLOAD Dec 16 13:10:32.411000 audit: BPF prog-id=80 op=UNLOAD Dec 16 13:10:32.418000 audit: BPF prog-id=90 op=LOAD Dec 16 13:10:32.418000 audit: BPF prog-id=72 op=UNLOAD Dec 16 13:10:32.418000 audit: BPF prog-id=91 op=LOAD Dec 16 13:10:32.418000 audit: BPF prog-id=92 op=LOAD Dec 16 13:10:32.418000 audit: BPF prog-id=73 op=UNLOAD Dec 16 13:10:32.418000 audit: BPF prog-id=74 op=UNLOAD Dec 16 13:10:32.418000 audit: BPF prog-id=93 op=LOAD Dec 16 13:10:32.418000 audit: BPF prog-id=85 op=UNLOAD Dec 16 13:10:32.418000 audit: BPF prog-id=94 op=LOAD Dec 16 13:10:32.418000 audit: BPF prog-id=69 op=UNLOAD Dec 16 13:10:32.418000 audit: BPF prog-id=95 op=LOAD Dec 16 13:10:32.418000 audit: BPF prog-id=96 op=LOAD Dec 16 13:10:32.418000 audit: BPF prog-id=70 op=UNLOAD Dec 16 13:10:32.418000 audit: BPF prog-id=71 op=UNLOAD Dec 16 13:10:32.419000 audit: BPF prog-id=97 op=LOAD Dec 16 13:10:32.419000 audit: BPF prog-id=86 op=UNLOAD Dec 16 13:10:32.420000 audit: BPF prog-id=98 op=LOAD Dec 16 13:10:32.420000 audit: BPF prog-id=81 op=UNLOAD Dec 16 13:10:32.420000 audit: BPF prog-id=99 op=LOAD Dec 16 13:10:32.420000 audit: BPF prog-id=100 op=LOAD Dec 16 13:10:32.420000 audit: BPF prog-id=67 op=UNLOAD Dec 16 13:10:32.420000 audit: BPF prog-id=68 op=UNLOAD Dec 16 13:10:32.421000 audit: BPF prog-id=101 op=LOAD Dec 16 13:10:32.421000 audit: BPF prog-id=82 op=UNLOAD Dec 16 13:10:32.421000 audit: BPF prog-id=102 op=LOAD Dec 16 13:10:32.421000 audit: BPF prog-id=103 op=LOAD Dec 16 13:10:32.421000 audit: BPF prog-id=83 op=UNLOAD Dec 16 13:10:32.421000 audit: BPF prog-id=84 op=UNLOAD Dec 16 13:10:32.422000 audit: BPF prog-id=104 op=LOAD Dec 16 13:10:32.422000 audit: BPF prog-id=75 op=UNLOAD Dec 16 13:10:32.422000 audit: BPF prog-id=105 op=LOAD Dec 16 13:10:32.422000 audit: BPF prog-id=106 op=LOAD Dec 16 13:10:32.422000 audit: BPF prog-id=76 op=UNLOAD Dec 16 13:10:32.422000 audit: BPF prog-id=77 op=UNLOAD Dec 16 13:10:33.026513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:10:33.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:33.035352 (kubelet)[3601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:10:33.068392 kubelet[3601]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:10:33.068392 kubelet[3601]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Dec 16 13:10:33.068392 kubelet[3601]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:10:33.068621 kubelet[3601]: I1216 13:10:33.068445 3601 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:10:33.233111 kubelet[3601]: I1216 13:10:33.233089 3601 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:10:33.233111 kubelet[3601]: I1216 13:10:33.233105 3601 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:10:33.233309 kubelet[3601]: I1216 13:10:33.233297 3601 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:10:33.260070 kubelet[3601]: E1216 13:10:33.260043 3601 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:10:33.262480 kubelet[3601]: I1216 13:10:33.262460 3601 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:10:33.271233 kubelet[3601]: I1216 13:10:33.271216 3601 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:10:33.273475 kubelet[3601]: I1216 13:10:33.273460 3601 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:10:33.273655 kubelet[3601]: I1216 13:10:33.273631 3601 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:10:33.273818 kubelet[3601]: I1216 13:10:33.273653 3601 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-e647365c22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:10:33.273918 kubelet[3601]: I1216 13:10:33.273824 3601 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:10:33.273918 kubelet[3601]: I1216 13:10:33.273833 3601 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:10:33.273959 kubelet[3601]: I1216 13:10:33.273926 3601 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:10:33.276290 kubelet[3601]: I1216 13:10:33.276276 3601 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:10:33.278093 kubelet[3601]: I1216 13:10:33.277647 3601 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:10:33.278093 kubelet[3601]: I1216 13:10:33.277673 3601 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:10:33.278093 kubelet[3601]: I1216 13:10:33.277682 3601 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:10:33.286356 kubelet[3601]: W1216 13:10:33.286322 3601 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-e647365c22&limit=500&resourceVersion=0": dial tcp 10.200.8.11:6443: connect: connection refused Dec 16 13:10:33.286488 kubelet[3601]: E1216 13:10:33.286462 3601 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-e647365c22&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:10:33.286577 
kubelet[3601]: I1216 13:10:33.286561 3601 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:10:33.286831 kubelet[3601]: I1216 13:10:33.286817 3601 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:10:33.288582 kubelet[3601]: W1216 13:10:33.288544 3601 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.11:6443: connect: connection refused Dec 16 13:10:33.288900 kubelet[3601]: E1216 13:10:33.288581 3601 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:10:33.288957 kubelet[3601]: W1216 13:10:33.288937 3601 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:10:33.290662 kubelet[3601]: I1216 13:10:33.290639 3601 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:10:33.290746 kubelet[3601]: I1216 13:10:33.290673 3601 server.go:1287] "Started kubelet" Dec 16 13:10:33.290848 kubelet[3601]: I1216 13:10:33.290821 3601 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:10:33.292437 kubelet[3601]: I1216 13:10:33.292010 3601 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:10:33.293852 kubelet[3601]: I1216 13:10:33.293810 3601 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:10:33.293967 kubelet[3601]: I1216 13:10:33.293951 3601 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:10:33.294107 kubelet[3601]: I1216 13:10:33.294095 3601 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:10:33.294187 kubelet[3601]: I1216 13:10:33.293895 3601 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:10:33.296000 audit[3612]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:33.296000 audit[3612]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe1143c930 a2=0 a3=0 items=0 ppid=3601 pid=3612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.296000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:10:33.297000 audit[3613]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:33.297000 audit[3613]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff01f1dc60 a2=0 a3=0 items=0 ppid=3601 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:10:33.298906 kubelet[3601]: E1216 13:10:33.297804 3601 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-a-e647365c22.1881b42cc57a9ab9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-e647365c22,UID:ci-4547.0.0-a-e647365c22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-e647365c22,},FirstTimestamp:2025-12-16 13:10:33.290652345 +0000 UTC m=+0.252406195,LastTimestamp:2025-12-16 13:10:33.290652345 +0000 UTC m=+0.252406195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-e647365c22,}" Dec 16 13:10:33.299935 kubelet[3601]: I1216 13:10:33.299400 3601 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:10:33.299935 kubelet[3601]: I1216 13:10:33.299464 3601 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:10:33.299935 kubelet[3601]: I1216 13:10:33.299498 3601 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:10:33.300112 kubelet[3601]: E1216 13:10:33.300099 3601 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-e647365c22\" not found" Dec 16 13:10:33.300286 kubelet[3601]: W1216 13:10:33.300259 3601 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.11:6443: connect: connection refused Dec 16 13:10:33.300322 kubelet[3601]: E1216 13:10:33.300297 3601 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:10:33.300524 kubelet[3601]: I1216 13:10:33.300509 3601 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:10:33.300573 kubelet[3601]: I1216 13:10:33.300563 3601 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:10:33.300000 audit[3615]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:33.300000 audit[3615]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffea27f6420 a2=0 a3=0 items=0 ppid=3601 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:10:33.301679 kubelet[3601]: E1216 13:10:33.301659 3601 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-e647365c22?timeout=10s\": dial tcp 10.200.8.11:6443: connect: connection refused" interval="200ms" Dec 16 13:10:33.302208 kubelet[3601]: E1216 13:10:33.302192 3601 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:10:33.302294 kubelet[3601]: I1216 13:10:33.302283 3601 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:10:33.302000 audit[3617]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:33.302000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff3bfe35f0 a2=0 a3=0 items=0 ppid=3601 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.302000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:10:33.325121 kubelet[3601]: I1216 13:10:33.325111 3601 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:10:33.325203 kubelet[3601]: I1216 13:10:33.325197 3601 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:10:33.325263 kubelet[3601]: I1216 13:10:33.325242 3601 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:10:33.332039 kubelet[3601]: I1216 13:10:33.331905 3601 policy_none.go:49] "None policy: Start" Dec 16 13:10:33.332039 kubelet[3601]: I1216 13:10:33.331918 3601 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:10:33.332039 kubelet[3601]: I1216 13:10:33.331925 3601 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:10:33.338999 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:10:33.348408 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:10:33.354000 audit[3624]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3624 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:33.354000 audit[3624]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe7ae580e0 a2=0 a3=0 items=0 ppid=3601 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 13:10:33.355366 kubelet[3601]: I1216 13:10:33.355345 3601 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 16 13:10:33.355000 audit[3625]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:33.355000 audit[3625]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdec2880a0 a2=0 a3=0 items=0 ppid=3601 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:10:33.355000 audit[3626]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:33.355000 audit[3626]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc58f835d0 a2=0 a3=0 items=0 ppid=3601 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:10:33.356869 kubelet[3601]: I1216 13:10:33.356770 3601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:10:33.356869 kubelet[3601]: I1216 13:10:33.356786 3601 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:10:33.356869 kubelet[3601]: I1216 13:10:33.356802 3601 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
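Annotation: the audit PROCTITLE records around these NETFILTER_CFG events carry the executed command as hex-encoded argv with NUL separators. A small decoder sketch; the hex string is copied verbatim from the ip6tables record logged at 13:10:33.355000 above:

```go
// decodeproctitle.go - decodes an audit PROCTITLE value (hex-encoded argv,
// NUL-separated) back into a readable command line.
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func main() {
	// Copied from the PROCTITLE record for the KUBE-IPTABLES-HINT chain above.
	const proctitle = "6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	argv := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(argv, " "))
	// Prints: ip6tables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
}
```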
Dec 16 13:10:33.356869 kubelet[3601]: I1216 13:10:33.356808 3601 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:10:33.356869 kubelet[3601]: E1216 13:10:33.356843 3601 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:10:33.358190 kubelet[3601]: W1216 13:10:33.358131 3601 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.11:6443: connect: connection refused Dec 16 13:10:33.358190 kubelet[3601]: E1216 13:10:33.358169 3601 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:10:33.357000 audit[3627]: NETFILTER_CFG table=mangle:52 family=10 entries=1 op=nft_register_chain pid=3627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:33.357000 audit[3627]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1bcfdc40 a2=0 a3=0 items=0 ppid=3601 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.357000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:10:33.358000 audit[3628]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_chain pid=3628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:33.358000 audit[3628]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcec26150 a2=0 a3=0 items=0 ppid=3601 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:10:33.360744 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
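Annotation: the repeated "failed to list *v1.Node / *v1.Service / *v1.RuntimeClass ... connection refused" lines are client-go reflectors inside the kubelet retrying while 10.200.8.11:6443 is not yet listening (the API server is one of the static pods being set up below). A hedged sketch of an equivalent list call; the kubeconfig path is an assumption, the field selector and limit mirror the URL in the log:

```go
// listnode.go - a hedged sketch of the List call the kubelet's node reflector
// is retrying above; kubeconfig path and error handling are illustrative.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors /api/v1/nodes?fieldSelector=metadata.name%3D<node>&limit=500 from the log.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ci-4547.0.0-a-e647365c22",
		Limit:         500,
	})
	if err != nil {
		// While the API server is down this fails with "connection refused",
		// as the reflector entries above show; the reflector simply retries.
		log.Fatal(err)
	}
	fmt.Println("nodes returned:", len(nodes.Items))
}
```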
Dec 16 13:10:33.361000 audit[3630]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:33.361000 audit[3630]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff06fe03c0 a2=0 a3=0 items=0 ppid=3601 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.361000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:10:33.361000 audit[3631]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:33.361000 audit[3631]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf3667360 a2=0 a3=0 items=0 ppid=3601 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:10:33.362000 audit[3632]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:33.362000 audit[3632]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcde8c8650 a2=0 a3=0 items=0 ppid=3601 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:10:33.363525 kubelet[3601]: I1216 13:10:33.363502 3601 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:10:33.363646 kubelet[3601]: I1216 13:10:33.363632 3601 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:10:33.363674 kubelet[3601]: I1216 13:10:33.363646 3601 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:10:33.364307 kubelet[3601]: I1216 13:10:33.363996 3601 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:10:33.365110 kubelet[3601]: E1216 13:10:33.365064 3601 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:10:33.365110 kubelet[3601]: E1216 13:10:33.365105 3601 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-a-e647365c22\" not found" Dec 16 13:10:33.464215 systemd[1]: Created slice kubepods-burstable-pod324258d31b2079042d231a40dd991443.slice - libcontainer container kubepods-burstable-pod324258d31b2079042d231a40dd991443.slice. 
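Annotation: with the systemd cgroup driver reported earlier, each pod lands in a slice under its QoS class, which is why systemd creates kubepods-burstable-pod324258d31b2079042d231a40dd991443.slice here. A rough illustration of that nested naming scheme, under the assumption that only the simple case seen in this log matters (the kubelet's real implementation handles escaping and all QoS classes):

```go
// podslice.go - a hedged illustration of how systemd-style pod slice paths nest,
// matching the kubepods-burstable-pod...slice name created above.
package main

import (
	"fmt"
	"path"
	"strings"
)

// slicePath turns a hierarchical cgroup name like ["kubepods","burstable","podXYZ"]
// into the nested systemd slice path, prefixing each level with its ancestors.
func slicePath(components []string) string {
	p := "/"
	prefix := ""
	for _, c := range components {
		if prefix == "" {
			prefix = c
		} else {
			prefix = prefix + "-" + c
		}
		p = path.Join(p, prefix+".slice")
	}
	return p
}

func main() {
	uid := strings.ReplaceAll("324258d31b2079042d231a40dd991443", "-", "_") // UID from the log
	fmt.Println(slicePath([]string{"kubepods", "burstable", "pod" + uid}))
	// Prints: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod324258d31b2079042d231a40dd991443.slice
}
```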
Dec 16 13:10:33.465518 kubelet[3601]: I1216 13:10:33.465493 3601 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.465944 kubelet[3601]: E1216 13:10:33.465921 3601 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.11:6443/api/v1/nodes\": dial tcp 10.200.8.11:6443: connect: connection refused" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.475299 kubelet[3601]: E1216 13:10:33.475281 3601 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.477495 systemd[1]: Created slice kubepods-burstable-podb31f48cb56488f5c70ccc41753afb75e.slice - libcontainer container kubepods-burstable-podb31f48cb56488f5c70ccc41753afb75e.slice. Dec 16 13:10:33.484443 kubelet[3601]: E1216 13:10:33.484430 3601 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.486234 systemd[1]: Created slice kubepods-burstable-pod85ec924e28f5e66529f5c0c63010b93c.slice - libcontainer container kubepods-burstable-pod85ec924e28f5e66529f5c0c63010b93c.slice. Dec 16 13:10:33.487775 kubelet[3601]: E1216 13:10:33.487759 3601 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.502150 kubelet[3601]: E1216 13:10:33.502128 3601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-e647365c22?timeout=10s\": dial tcp 10.200.8.11:6443: connect: connection refused" interval="400ms" Dec 16 13:10:33.600023 kubelet[3601]: I1216 13:10:33.600000 3601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/324258d31b2079042d231a40dd991443-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-e647365c22\" (UID: \"324258d31b2079042d231a40dd991443\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.600099 kubelet[3601]: I1216 13:10:33.600029 3601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b31f48cb56488f5c70ccc41753afb75e-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-e647365c22\" (UID: \"b31f48cb56488f5c70ccc41753afb75e\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.600099 kubelet[3601]: I1216 13:10:33.600047 3601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b31f48cb56488f5c70ccc41753afb75e-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-e647365c22\" (UID: \"b31f48cb56488f5c70ccc41753afb75e\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.600099 kubelet[3601]: I1216 13:10:33.600062 3601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " 
pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.600099 kubelet[3601]: I1216 13:10:33.600078 3601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b31f48cb56488f5c70ccc41753afb75e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-e647365c22\" (UID: \"b31f48cb56488f5c70ccc41753afb75e\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.600099 kubelet[3601]: I1216 13:10:33.600093 3601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.600253 kubelet[3601]: I1216 13:10:33.600106 3601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.600253 kubelet[3601]: I1216 13:10:33.600121 3601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.600253 kubelet[3601]: I1216 13:10:33.600136 3601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.666975 kubelet[3601]: I1216 13:10:33.666958 3601 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.667226 kubelet[3601]: E1216 13:10:33.667208 3601 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.11:6443/api/v1/nodes\": dial tcp 10.200.8.11:6443: connect: connection refused" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:33.776213 containerd[2582]: time="2025-12-16T13:10:33.776171222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-e647365c22,Uid:324258d31b2079042d231a40dd991443,Namespace:kube-system,Attempt:0,}" Dec 16 13:10:33.785621 containerd[2582]: time="2025-12-16T13:10:33.785598773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-e647365c22,Uid:b31f48cb56488f5c70ccc41753afb75e,Namespace:kube-system,Attempt:0,}" Dec 16 13:10:33.789232 containerd[2582]: time="2025-12-16T13:10:33.789206998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-e647365c22,Uid:85ec924e28f5e66529f5c0c63010b93c,Namespace:kube-system,Attempt:0,}" Dec 16 13:10:33.837348 containerd[2582]: time="2025-12-16T13:10:33.837320145Z" level=info 
msg="connecting to shim 815eedb9649fec784673f9d3560bc63c9c97c71c736cc467e7f5e8b448dc276f" address="unix:///run/containerd/s/4296a12b97415c5575979f84c90fb4bef6e8db2b4d56c0b3f281db6d16dfdab4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:10:33.864531 containerd[2582]: time="2025-12-16T13:10:33.864008183Z" level=info msg="connecting to shim 680913b606ee696a1fe9c38d29fa31ccefdda401349c0aa906a60757ad7fa245" address="unix:///run/containerd/s/4834d2cd1f19e9f6eaa00046605fa1a036e3e38ac63137bc3da93043b78f310f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:10:33.864922 systemd[1]: Started cri-containerd-815eedb9649fec784673f9d3560bc63c9c97c71c736cc467e7f5e8b448dc276f.scope - libcontainer container 815eedb9649fec784673f9d3560bc63c9c97c71c736cc467e7f5e8b448dc276f. Dec 16 13:10:33.883065 containerd[2582]: time="2025-12-16T13:10:33.883042210Z" level=info msg="connecting to shim c893d8b1dfa2e4ff5c00282bfb91a37dcd9c302949e656137385df37c89f9663" address="unix:///run/containerd/s/8e6fed248fc5001e4c5e7627e2380c27e8ca8238c554ad494ab2a1659d75965b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:10:33.882000 audit: BPF prog-id=107 op=LOAD Dec 16 13:10:33.883000 audit: BPF prog-id=108 op=LOAD Dec 16 13:10:33.883000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3642 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831356565646239363439666563373834363733663964333536306263 Dec 16 13:10:33.883000 audit: BPF prog-id=108 op=UNLOAD Dec 16 13:10:33.883000 audit[3653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831356565646239363439666563373834363733663964333536306263 Dec 16 13:10:33.883000 audit: BPF prog-id=109 op=LOAD Dec 16 13:10:33.883000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3642 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831356565646239363439666563373834363733663964333536306263 Dec 16 13:10:33.883000 audit: BPF prog-id=110 op=LOAD Dec 16 13:10:33.883000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3642 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831356565646239363439666563373834363733663964333536306263 Dec 16 13:10:33.883000 audit: BPF prog-id=110 op=UNLOAD Dec 16 13:10:33.883000 audit[3653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831356565646239363439666563373834363733663964333536306263 Dec 16 13:10:33.883000 audit: BPF prog-id=109 op=UNLOAD Dec 16 13:10:33.883000 audit[3653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831356565646239363439666563373834363733663964333536306263 Dec 16 13:10:33.883000 audit: BPF prog-id=111 op=LOAD Dec 16 13:10:33.883000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3642 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831356565646239363439666563373834363733663964333536306263 Dec 16 13:10:33.902844 systemd[1]: Started cri-containerd-680913b606ee696a1fe9c38d29fa31ccefdda401349c0aa906a60757ad7fa245.scope - libcontainer container 680913b606ee696a1fe9c38d29fa31ccefdda401349c0aa906a60757ad7fa245. Dec 16 13:10:33.903197 kubelet[3601]: E1216 13:10:33.902842 3601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-e647365c22?timeout=10s\": dial tcp 10.200.8.11:6443: connect: connection refused" interval="800ms" Dec 16 13:10:33.908010 systemd[1]: Started cri-containerd-c893d8b1dfa2e4ff5c00282bfb91a37dcd9c302949e656137385df37c89f9663.scope - libcontainer container c893d8b1dfa2e4ff5c00282bfb91a37dcd9c302949e656137385df37c89f9663. 
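Annotation: "Failed to ensure lease exists, will retry" is the kubelet's node lease controller trying to fetch (and, if missing, create) its Lease in kube-node-lease, backing off from 200ms through 400ms to the 800ms interval shown here while the API server stays unreachable. A hedged sketch of the equivalent get-or-create against the coordination API; the kubeconfig path is an assumption:

```go
// nodelease.go - a hedged sketch of the get-or-create the kubelet's lease
// controller is retrying above; kubeconfig path is illustrative.
package main

import (
	"context"
	"fmt"
	"log"

	coordinationv1 "k8s.io/api/coordination/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	const node = "ci-4547.0.0-a-e647365c22" // node name from the log
	leases := client.CoordinationV1().Leases("kube-node-lease")

	lease, err := leases.Get(context.TODO(), node, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		lease, err = leases.Create(context.TODO(), &coordinationv1.Lease{
			ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: "kube-node-lease"},
		}, metav1.CreateOptions{})
	}
	if err != nil {
		// With the API server still down this is the "connection refused"
		// logged before the controller retries at the next interval.
		log.Fatal(err)
	}
	fmt.Println("lease present:", lease.Name)
}
```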
Dec 16 13:10:33.922000 audit: BPF prog-id=112 op=LOAD Dec 16 13:10:33.923000 audit: BPF prog-id=113 op=LOAD Dec 16 13:10:33.923000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3674 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303931336236303665653639366131666539633338643239666133 Dec 16 13:10:33.923000 audit: BPF prog-id=113 op=UNLOAD Dec 16 13:10:33.923000 audit[3698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3674 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303931336236303665653639366131666539633338643239666133 Dec 16 13:10:33.923000 audit: BPF prog-id=114 op=LOAD Dec 16 13:10:33.923000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3674 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303931336236303665653639366131666539633338643239666133 Dec 16 13:10:33.923000 audit: BPF prog-id=115 op=LOAD Dec 16 13:10:33.923000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3674 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303931336236303665653639366131666539633338643239666133 Dec 16 13:10:33.923000 audit: BPF prog-id=115 op=UNLOAD Dec 16 13:10:33.923000 audit[3698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3674 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303931336236303665653639366131666539633338643239666133 Dec 16 13:10:33.923000 audit: BPF prog-id=114 op=UNLOAD Dec 16 13:10:33.923000 audit[3698]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3674 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303931336236303665653639366131666539633338643239666133 Dec 16 13:10:33.923000 audit: BPF prog-id=116 op=LOAD Dec 16 13:10:33.923000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3674 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303931336236303665653639366131666539633338643239666133 Dec 16 13:10:33.926000 audit: BPF prog-id=117 op=LOAD Dec 16 13:10:33.926000 audit: BPF prog-id=118 op=LOAD Dec 16 13:10:33.926000 audit[3724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3706 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338393364386231646661326534666635633030323832626662393161 Dec 16 13:10:33.926000 audit: BPF prog-id=118 op=UNLOAD Dec 16 13:10:33.926000 audit[3724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3706 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338393364386231646661326534666635633030323832626662393161 Dec 16 13:10:33.927000 audit: BPF prog-id=119 op=LOAD Dec 16 13:10:33.927000 audit[3724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3706 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338393364386231646661326534666635633030323832626662393161 Dec 16 13:10:33.927000 audit: BPF prog-id=120 op=LOAD Dec 16 13:10:33.927000 audit[3724]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3706 pid=3724 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338393364386231646661326534666635633030323832626662393161 Dec 16 13:10:33.927000 audit: BPF prog-id=120 op=UNLOAD Dec 16 13:10:33.927000 audit[3724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3706 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338393364386231646661326534666635633030323832626662393161 Dec 16 13:10:33.927000 audit: BPF prog-id=119 op=UNLOAD Dec 16 13:10:33.927000 audit[3724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3706 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338393364386231646661326534666635633030323832626662393161 Dec 16 13:10:33.927000 audit: BPF prog-id=121 op=LOAD Dec 16 13:10:33.927000 audit[3724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3706 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:33.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338393364386231646661326534666635633030323832626662393161 Dec 16 13:10:33.942293 containerd[2582]: time="2025-12-16T13:10:33.942161493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-e647365c22,Uid:324258d31b2079042d231a40dd991443,Namespace:kube-system,Attempt:0,} returns sandbox id \"815eedb9649fec784673f9d3560bc63c9c97c71c736cc467e7f5e8b448dc276f\"" Dec 16 13:10:33.948417 containerd[2582]: time="2025-12-16T13:10:33.948393402Z" level=info msg="CreateContainer within sandbox \"815eedb9649fec784673f9d3560bc63c9c97c71c736cc467e7f5e8b448dc276f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:10:33.968025 containerd[2582]: time="2025-12-16T13:10:33.968007139Z" level=info msg="Container 89a6dae6d161f45caa5a5261c8aaf4009c7d35820cfeeea9b8985d6bf0537684: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:33.981521 containerd[2582]: time="2025-12-16T13:10:33.981474241Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-e647365c22,Uid:b31f48cb56488f5c70ccc41753afb75e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c893d8b1dfa2e4ff5c00282bfb91a37dcd9c302949e656137385df37c89f9663\"" Dec 16 13:10:33.982888 containerd[2582]: time="2025-12-16T13:10:33.982686109Z" level=info msg="CreateContainer within sandbox \"c893d8b1dfa2e4ff5c00282bfb91a37dcd9c302949e656137385df37c89f9663\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:10:34.003545 containerd[2582]: time="2025-12-16T13:10:34.003522934Z" level=info msg="CreateContainer within sandbox \"815eedb9649fec784673f9d3560bc63c9c97c71c736cc467e7f5e8b448dc276f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"89a6dae6d161f45caa5a5261c8aaf4009c7d35820cfeeea9b8985d6bf0537684\"" Dec 16 13:10:34.003882 containerd[2582]: time="2025-12-16T13:10:34.003858407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-e647365c22,Uid:85ec924e28f5e66529f5c0c63010b93c,Namespace:kube-system,Attempt:0,} returns sandbox id \"680913b606ee696a1fe9c38d29fa31ccefdda401349c0aa906a60757ad7fa245\"" Dec 16 13:10:34.004189 containerd[2582]: time="2025-12-16T13:10:34.004166968Z" level=info msg="StartContainer for \"89a6dae6d161f45caa5a5261c8aaf4009c7d35820cfeeea9b8985d6bf0537684\"" Dec 16 13:10:34.004454 containerd[2582]: time="2025-12-16T13:10:34.004175289Z" level=info msg="Container 6ed9d5d27fddb4c49a5d3a6a4afe86d4f487d40bec84752f28beefed5d46464b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:34.005006 containerd[2582]: time="2025-12-16T13:10:34.004985920Z" level=info msg="connecting to shim 89a6dae6d161f45caa5a5261c8aaf4009c7d35820cfeeea9b8985d6bf0537684" address="unix:///run/containerd/s/4296a12b97415c5575979f84c90fb4bef6e8db2b4d56c0b3f281db6d16dfdab4" protocol=ttrpc version=3 Dec 16 13:10:34.006689 containerd[2582]: time="2025-12-16T13:10:34.006645838Z" level=info msg="CreateContainer within sandbox \"680913b606ee696a1fe9c38d29fa31ccefdda401349c0aa906a60757ad7fa245\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:10:34.017759 containerd[2582]: time="2025-12-16T13:10:34.017736295Z" level=info msg="CreateContainer within sandbox \"c893d8b1dfa2e4ff5c00282bfb91a37dcd9c302949e656137385df37c89f9663\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6ed9d5d27fddb4c49a5d3a6a4afe86d4f487d40bec84752f28beefed5d46464b\"" Dec 16 13:10:34.018104 containerd[2582]: time="2025-12-16T13:10:34.018073056Z" level=info msg="StartContainer for \"6ed9d5d27fddb4c49a5d3a6a4afe86d4f487d40bec84752f28beefed5d46464b\"" Dec 16 13:10:34.018926 containerd[2582]: time="2025-12-16T13:10:34.018860769Z" level=info msg="connecting to shim 6ed9d5d27fddb4c49a5d3a6a4afe86d4f487d40bec84752f28beefed5d46464b" address="unix:///run/containerd/s/8e6fed248fc5001e4c5e7627e2380c27e8ca8238c554ad494ab2a1659d75965b" protocol=ttrpc version=3 Dec 16 13:10:34.020097 systemd[1]: Started cri-containerd-89a6dae6d161f45caa5a5261c8aaf4009c7d35820cfeeea9b8985d6bf0537684.scope - libcontainer container 89a6dae6d161f45caa5a5261c8aaf4009c7d35820cfeeea9b8985d6bf0537684. 
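Annotation: the RunPodSandbox / CreateContainer / StartContainer entries are the kubelet driving containerd through the CRI API: a sandbox is created for each static control-plane pod, then containers are created and started inside the returned sandbox id (815eedb9..., c893d8b1..., 680913b6... above). A hedged sketch of the first step using the CRI client types; the socket path is the usual containerd default and an assumption here, while the metadata values are copied from the kube-scheduler entry in the log:

```go
// runsandbox.go - a hedged sketch of the CRI RunPodSandbox call behind the
// "RunPodSandbox ... returns sandbox id" entries above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI endpoint; not stated explicitly in this log.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Metadata copied from the kube-scheduler static pod entry above.
	resp, err := rt.RunPodSandbox(context.TODO(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-ci-4547.0.0-a-e647365c22",
				Uid:       "324258d31b2079042d231a40dd991443",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// containerd answers with the sandbox id that later CreateContainer and
	// StartContainer calls refer to, as in the log entries that follow.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```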
Dec 16 13:10:34.032240 containerd[2582]: time="2025-12-16T13:10:34.032178633Z" level=info msg="Container 956c7b21504fe10fdb647fd58fd24ff95b4b2281c8e88168c88659aa91b8e0e6: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:34.040854 systemd[1]: Started cri-containerd-6ed9d5d27fddb4c49a5d3a6a4afe86d4f487d40bec84752f28beefed5d46464b.scope - libcontainer container 6ed9d5d27fddb4c49a5d3a6a4afe86d4f487d40bec84752f28beefed5d46464b. Dec 16 13:10:34.042000 audit: BPF prog-id=122 op=LOAD Dec 16 13:10:34.044000 audit: BPF prog-id=123 op=LOAD Dec 16 13:10:34.044000 audit[3771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3642 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839613664616536643136316634356361613561353236316338616166 Dec 16 13:10:34.044000 audit: BPF prog-id=123 op=UNLOAD Dec 16 13:10:34.044000 audit[3771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839613664616536643136316634356361613561353236316338616166 Dec 16 13:10:34.044000 audit: BPF prog-id=124 op=LOAD Dec 16 13:10:34.044000 audit[3771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3642 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839613664616536643136316634356361613561353236316338616166 Dec 16 13:10:34.044000 audit: BPF prog-id=125 op=LOAD Dec 16 13:10:34.044000 audit[3771]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3642 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839613664616536643136316634356361613561353236316338616166 Dec 16 13:10:34.044000 audit: BPF prog-id=125 op=UNLOAD Dec 16 13:10:34.044000 audit[3771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839613664616536643136316634356361613561353236316338616166 Dec 16 13:10:34.044000 audit: BPF prog-id=124 op=UNLOAD Dec 16 13:10:34.044000 audit[3771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839613664616536643136316634356361613561353236316338616166 Dec 16 13:10:34.044000 audit: BPF prog-id=126 op=LOAD Dec 16 13:10:34.044000 audit[3771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3642 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839613664616536643136316634356361613561353236316338616166 Dec 16 13:10:34.048824 containerd[2582]: time="2025-12-16T13:10:34.048793005Z" level=info msg="CreateContainer within sandbox \"680913b606ee696a1fe9c38d29fa31ccefdda401349c0aa906a60757ad7fa245\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"956c7b21504fe10fdb647fd58fd24ff95b4b2281c8e88168c88659aa91b8e0e6\"" Dec 16 13:10:34.057643 containerd[2582]: time="2025-12-16T13:10:34.057600575Z" level=info msg="StartContainer for \"956c7b21504fe10fdb647fd58fd24ff95b4b2281c8e88168c88659aa91b8e0e6\"" Dec 16 13:10:34.058000 audit: BPF prog-id=127 op=LOAD Dec 16 13:10:34.059770 containerd[2582]: time="2025-12-16T13:10:34.059732659Z" level=info msg="connecting to shim 956c7b21504fe10fdb647fd58fd24ff95b4b2281c8e88168c88659aa91b8e0e6" address="unix:///run/containerd/s/4834d2cd1f19e9f6eaa00046605fa1a036e3e38ac63137bc3da93043b78f310f" protocol=ttrpc version=3 Dec 16 13:10:34.059000 audit: BPF prog-id=128 op=LOAD Dec 16 13:10:34.059000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3706 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643964356432376664646234633439613564336136613461666538 Dec 16 13:10:34.060000 audit: BPF prog-id=128 op=UNLOAD Dec 16 13:10:34.060000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3706 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643964356432376664646234633439613564336136613461666538 Dec 16 13:10:34.060000 audit: BPF prog-id=129 op=LOAD Dec 16 13:10:34.060000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3706 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643964356432376664646234633439613564336136613461666538 Dec 16 13:10:34.060000 audit: BPF prog-id=130 op=LOAD Dec 16 13:10:34.060000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3706 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643964356432376664646234633439613564336136613461666538 Dec 16 13:10:34.060000 audit: BPF prog-id=130 op=UNLOAD Dec 16 13:10:34.060000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3706 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643964356432376664646234633439613564336136613461666538 Dec 16 13:10:34.060000 audit: BPF prog-id=129 op=UNLOAD Dec 16 13:10:34.060000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3706 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643964356432376664646234633439613564336136613461666538 Dec 16 13:10:34.060000 audit: BPF prog-id=131 op=LOAD Dec 16 13:10:34.060000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3706 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.060000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643964356432376664646234633439613564336136613461666538 Dec 16 13:10:34.068941 kubelet[3601]: I1216 13:10:34.068913 3601 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:34.069150 kubelet[3601]: E1216 13:10:34.069127 3601 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.11:6443/api/v1/nodes\": dial tcp 10.200.8.11:6443: connect: connection refused" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:34.086254 systemd[1]: Started cri-containerd-956c7b21504fe10fdb647fd58fd24ff95b4b2281c8e88168c88659aa91b8e0e6.scope - libcontainer container 956c7b21504fe10fdb647fd58fd24ff95b4b2281c8e88168c88659aa91b8e0e6. Dec 16 13:10:34.099000 audit: BPF prog-id=132 op=LOAD Dec 16 13:10:34.099000 audit: BPF prog-id=133 op=LOAD Dec 16 13:10:34.099000 audit[3810]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3674 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935366337623231353034666531306664623634376664353866643234 Dec 16 13:10:34.099000 audit: BPF prog-id=133 op=UNLOAD Dec 16 13:10:34.099000 audit[3810]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3674 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935366337623231353034666531306664623634376664353866643234 Dec 16 13:10:34.100000 audit: BPF prog-id=134 op=LOAD Dec 16 13:10:34.100000 audit[3810]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3674 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935366337623231353034666531306664623634376664353866643234 Dec 16 13:10:34.100000 audit: BPF prog-id=135 op=LOAD Dec 16 13:10:34.100000 audit[3810]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3674 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.100000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935366337623231353034666531306664623634376664353866643234 Dec 16 13:10:34.100000 audit: BPF prog-id=135 op=UNLOAD Dec 16 13:10:34.100000 audit[3810]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3674 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935366337623231353034666531306664623634376664353866643234 Dec 16 13:10:34.100000 audit: BPF prog-id=134 op=UNLOAD Dec 16 13:10:34.100000 audit[3810]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3674 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935366337623231353034666531306664623634376664353866643234 Dec 16 13:10:34.100000 audit: BPF prog-id=136 op=LOAD Dec 16 13:10:34.100000 audit[3810]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3674 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935366337623231353034666531306664623634376664353866643234 Dec 16 13:10:34.115465 containerd[2582]: time="2025-12-16T13:10:34.115355883Z" level=info msg="StartContainer for \"89a6dae6d161f45caa5a5261c8aaf4009c7d35820cfeeea9b8985d6bf0537684\" returns successfully" Dec 16 13:10:34.129337 containerd[2582]: time="2025-12-16T13:10:34.129293934Z" level=info msg="StartContainer for \"6ed9d5d27fddb4c49a5d3a6a4afe86d4f487d40bec84752f28beefed5d46464b\" returns successfully" Dec 16 13:10:34.157433 containerd[2582]: time="2025-12-16T13:10:34.157410651Z" level=info msg="StartContainer for \"956c7b21504fe10fdb647fd58fd24ff95b4b2281c8e88168c88659aa91b8e0e6\" returns successfully" Dec 16 13:10:34.365188 kubelet[3601]: E1216 13:10:34.365168 3601 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:34.368435 kubelet[3601]: E1216 13:10:34.368384 3601 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:34.371082 kubelet[3601]: E1216 13:10:34.371066 3601 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:34.871546 kubelet[3601]: I1216 13:10:34.871532 3601 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:35.372574 kubelet[3601]: E1216 13:10:35.372067 3601 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:35.372574 kubelet[3601]: E1216 13:10:35.372318 3601 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:35.372574 kubelet[3601]: E1216 13:10:35.372485 3601 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:35.598720 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 16 13:10:35.901000 kubelet[3601]: E1216 13:10:35.900970 3601 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-a-e647365c22\" not found" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:36.019437 kubelet[3601]: I1216 13:10:36.019322 3601 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:36.019437 kubelet[3601]: E1216 13:10:36.019346 3601 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547.0.0-a-e647365c22\": node \"ci-4547.0.0-a-e647365c22\" not found" Dec 16 13:10:36.101314 kubelet[3601]: I1216 13:10:36.101100 3601 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" Dec 16 13:10:36.105642 kubelet[3601]: E1216 13:10:36.105624 3601 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-e647365c22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" Dec 16 13:10:36.105758 kubelet[3601]: I1216 13:10:36.105750 3601 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:36.107158 kubelet[3601]: E1216 13:10:36.107101 3601 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-e647365c22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:36.107158 kubelet[3601]: I1216 13:10:36.107120 3601 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:36.109743 kubelet[3601]: E1216 13:10:36.109725 3601 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:36.290950 kubelet[3601]: I1216 13:10:36.290870 3601 apiserver.go:52] "Watching apiserver" Dec 16 13:10:36.300136 kubelet[3601]: I1216 13:10:36.300113 3601 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:10:36.373862 kubelet[3601]: I1216 13:10:36.373812 3601 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" Dec 16 13:10:36.375351 kubelet[3601]: E1216 13:10:36.375309 3601 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-e647365c22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" Dec 16 13:10:37.938260 kubelet[3601]: I1216 13:10:37.938229 3601 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:37.945723 kubelet[3601]: W1216 13:10:37.945667 3601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:10:38.068327 systemd[1]: Reload requested from client PID 3869 ('systemctl') (unit session-10.scope)... Dec 16 13:10:38.068343 systemd[1]: Reloading... Dec 16 13:10:38.143783 zram_generator::config[3919]: No configuration found. Dec 16 13:10:38.314183 systemd[1]: Reloading finished in 245 ms. Dec 16 13:10:38.343593 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:10:38.362014 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:10:38.362219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:10:38.362260 systemd[1]: kubelet.service: Consumed 496ms CPU time, 131M memory peak. Dec 16 13:10:38.366866 kernel: kauditd_printk_skb: 204 callbacks suppressed Dec 16 13:10:38.366926 kernel: audit: type=1131 audit(1765890638.361:421): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:38.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:38.366663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:10:38.370249 kernel: audit: type=1334 audit(1765890638.366:422): prog-id=137 op=LOAD Dec 16 13:10:38.370305 kernel: audit: type=1334 audit(1765890638.366:423): prog-id=101 op=UNLOAD Dec 16 13:10:38.366000 audit: BPF prog-id=137 op=LOAD Dec 16 13:10:38.366000 audit: BPF prog-id=101 op=UNLOAD Dec 16 13:10:38.375198 kernel: audit: type=1334 audit(1765890638.367:424): prog-id=138 op=LOAD Dec 16 13:10:38.375251 kernel: audit: type=1334 audit(1765890638.367:425): prog-id=139 op=LOAD Dec 16 13:10:38.367000 audit: BPF prog-id=138 op=LOAD Dec 16 13:10:38.367000 audit: BPF prog-id=139 op=LOAD Dec 16 13:10:38.367000 audit: BPF prog-id=102 op=UNLOAD Dec 16 13:10:38.367000 audit: BPF prog-id=103 op=UNLOAD Dec 16 13:10:38.378114 kernel: audit: type=1334 audit(1765890638.367:426): prog-id=102 op=UNLOAD Dec 16 13:10:38.378144 kernel: audit: type=1334 audit(1765890638.367:427): prog-id=103 op=UNLOAD Dec 16 13:10:38.369000 audit: BPF prog-id=140 op=LOAD Dec 16 13:10:38.378959 kernel: audit: type=1334 audit(1765890638.369:428): prog-id=140 op=LOAD Dec 16 13:10:38.369000 audit: BPF prog-id=104 op=UNLOAD Dec 16 13:10:38.379815 kernel: audit: type=1334 audit(1765890638.369:429): prog-id=104 op=UNLOAD Dec 16 13:10:38.369000 audit: BPF prog-id=141 op=LOAD Dec 16 13:10:38.380772 kernel: audit: type=1334 audit(1765890638.369:430): prog-id=141 op=LOAD Dec 16 13:10:38.369000 audit: BPF prog-id=142 op=LOAD Dec 16 13:10:38.369000 audit: BPF prog-id=105 op=UNLOAD Dec 16 13:10:38.369000 audit: BPF prog-id=106 op=UNLOAD Dec 16 13:10:38.370000 audit: BPF prog-id=143 op=LOAD Dec 16 13:10:38.370000 audit: BPF prog-id=90 op=UNLOAD Dec 16 13:10:38.371000 audit: BPF prog-id=144 op=LOAD Dec 16 13:10:38.371000 audit: BPF prog-id=145 op=LOAD Dec 16 13:10:38.371000 audit: BPF prog-id=91 op=UNLOAD Dec 16 13:10:38.371000 audit: BPF prog-id=92 op=UNLOAD Dec 16 13:10:38.372000 audit: BPF prog-id=146 op=LOAD Dec 16 13:10:38.372000 audit: BPF prog-id=87 op=UNLOAD Dec 16 13:10:38.372000 audit: BPF prog-id=147 op=LOAD Dec 16 13:10:38.372000 audit: BPF prog-id=148 op=LOAD Dec 16 13:10:38.372000 audit: BPF prog-id=88 op=UNLOAD Dec 16 13:10:38.372000 audit: BPF prog-id=89 op=UNLOAD Dec 16 13:10:38.374000 audit: BPF prog-id=149 op=LOAD Dec 16 13:10:38.374000 audit: BPF prog-id=97 op=UNLOAD Dec 16 13:10:38.375000 audit: BPF prog-id=150 op=LOAD Dec 16 13:10:38.375000 audit: BPF prog-id=98 op=UNLOAD Dec 16 13:10:38.376000 audit: BPF prog-id=151 op=LOAD Dec 16 13:10:38.376000 audit: BPF prog-id=94 op=UNLOAD Dec 16 13:10:38.376000 audit: BPF prog-id=152 op=LOAD Dec 16 13:10:38.376000 audit: BPF prog-id=153 op=LOAD Dec 16 13:10:38.376000 audit: BPF prog-id=95 op=UNLOAD Dec 16 13:10:38.376000 audit: BPF prog-id=96 op=UNLOAD Dec 16 13:10:38.382000 audit: BPF prog-id=154 op=LOAD Dec 16 13:10:38.382000 audit: BPF prog-id=155 op=LOAD Dec 16 13:10:38.382000 audit: BPF prog-id=99 op=UNLOAD Dec 16 13:10:38.382000 audit: BPF prog-id=100 op=UNLOAD Dec 16 13:10:38.383000 audit: BPF prog-id=156 op=LOAD Dec 16 13:10:38.383000 audit: BPF prog-id=93 op=UNLOAD Dec 16 13:10:38.873618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:10:38.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:10:38.880940 (kubelet)[3986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:10:38.918065 kubelet[3986]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:10:38.918065 kubelet[3986]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:10:38.918065 kubelet[3986]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:10:38.918298 kubelet[3986]: I1216 13:10:38.918111 3986 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:10:38.924713 kubelet[3986]: I1216 13:10:38.924671 3986 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:10:38.925819 kubelet[3986]: I1216 13:10:38.924734 3986 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:10:38.925819 kubelet[3986]: I1216 13:10:38.925080 3986 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:10:38.927803 kubelet[3986]: I1216 13:10:38.927783 3986 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 13:10:38.929445 kubelet[3986]: I1216 13:10:38.929429 3986 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:10:38.932107 kubelet[3986]: I1216 13:10:38.932092 3986 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:10:38.933871 kubelet[3986]: I1216 13:10:38.933850 3986 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:10:38.934004 kubelet[3986]: I1216 13:10:38.933981 3986 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:10:38.934124 kubelet[3986]: I1216 13:10:38.934002 3986 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-e647365c22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:10:38.934212 kubelet[3986]: I1216 13:10:38.934131 3986 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:10:38.934212 kubelet[3986]: I1216 13:10:38.934140 3986 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:10:38.934212 kubelet[3986]: I1216 13:10:38.934178 3986 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:10:38.934283 kubelet[3986]: I1216 13:10:38.934277 3986 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:10:38.934304 kubelet[3986]: I1216 13:10:38.934295 3986 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:10:38.934323 kubelet[3986]: I1216 13:10:38.934316 3986 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:10:38.934346 kubelet[3986]: I1216 13:10:38.934324 3986 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:10:38.937718 kubelet[3986]: I1216 13:10:38.935541 3986 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:10:38.937718 kubelet[3986]: I1216 13:10:38.937670 3986 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:10:38.938043 kubelet[3986]: I1216 13:10:38.938028 3986 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:10:38.938078 kubelet[3986]: I1216 13:10:38.938057 3986 server.go:1287] "Started kubelet" Dec 16 13:10:38.940869 kubelet[3986]: I1216 13:10:38.940851 3986 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:10:38.945387 kubelet[3986]: I1216 13:10:38.944725 3986 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:10:38.949737 kubelet[3986]: I1216 13:10:38.948857 3986 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:10:38.949737 kubelet[3986]: I1216 13:10:38.949651 3986 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:10:38.950985 kubelet[3986]: I1216 13:10:38.950934 3986 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:10:38.954896 kubelet[3986]: I1216 13:10:38.954874 3986 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:10:38.964064 kubelet[3986]: I1216 13:10:38.964045 3986 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:10:38.964204 kubelet[3986]: E1216 13:10:38.964189 3986 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-e647365c22\" not found" Dec 16 13:10:38.966170 kubelet[3986]: I1216 13:10:38.966123 3986 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:10:38.966227 kubelet[3986]: I1216 13:10:38.966218 3986 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:10:38.971396 kubelet[3986]: I1216 13:10:38.970899 3986 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:10:38.973319 kubelet[3986]: I1216 13:10:38.973298 3986 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:10:38.973382 kubelet[3986]: I1216 13:10:38.973326 3986 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:10:38.973382 kubelet[3986]: I1216 13:10:38.973339 3986 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:10:38.973382 kubelet[3986]: I1216 13:10:38.973346 3986 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:10:38.973444 kubelet[3986]: E1216 13:10:38.973377 3986 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:10:38.976214 kubelet[3986]: I1216 13:10:38.974925 3986 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:10:38.976214 kubelet[3986]: I1216 13:10:38.975009 3986 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:10:38.980268 kubelet[3986]: E1216 13:10:38.980245 3986 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:10:38.980331 kubelet[3986]: I1216 13:10:38.980326 3986 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:10:39.033949 kubelet[3986]: I1216 13:10:39.033737 3986 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:10:39.033949 kubelet[3986]: I1216 13:10:39.033751 3986 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:10:39.033949 kubelet[3986]: I1216 13:10:39.033766 3986 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:10:39.033949 kubelet[3986]: I1216 13:10:39.033890 3986 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:10:39.033949 kubelet[3986]: I1216 13:10:39.033897 3986 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:10:39.033949 kubelet[3986]: I1216 13:10:39.033911 3986 policy_none.go:49] "None policy: Start" Dec 16 13:10:39.033949 kubelet[3986]: I1216 13:10:39.033918 3986 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:10:39.033949 kubelet[3986]: I1216 13:10:39.033925 3986 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:10:39.034146 kubelet[3986]: I1216 13:10:39.034003 3986 state_mem.go:75] "Updated machine memory state" Dec 16 13:10:39.039265 kubelet[3986]: I1216 13:10:39.039252 3986 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:10:39.039431 kubelet[3986]: I1216 13:10:39.039424 3986 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:10:39.039484 kubelet[3986]: I1216 13:10:39.039467 3986 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:10:39.039925 kubelet[3986]: I1216 13:10:39.039915 3986 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:10:39.043377 kubelet[3986]: E1216 13:10:39.042660 3986 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:10:39.074379 kubelet[3986]: I1216 13:10:39.074357 3986 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.074950 kubelet[3986]: I1216 13:10:39.074536 3986 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.075177 kubelet[3986]: I1216 13:10:39.074965 3986 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.083874 kubelet[3986]: W1216 13:10:39.083695 3986 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:10:39.087839 kubelet[3986]: W1216 13:10:39.087688 3986 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:10:39.088035 kubelet[3986]: W1216 13:10:39.088028 3986 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:10:39.088151 kubelet[3986]: E1216 13:10:39.088092 3986 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-e647365c22\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.141820 kubelet[3986]: I1216 13:10:39.141765 3986 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.152577 kubelet[3986]: I1216 13:10:39.152563 3986 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.152661 kubelet[3986]: I1216 13:10:39.152608 3986 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.168077 kubelet[3986]: I1216 13:10:39.168024 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/324258d31b2079042d231a40dd991443-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-e647365c22\" (UID: \"324258d31b2079042d231a40dd991443\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.168223 kubelet[3986]: I1216 13:10:39.168149 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b31f48cb56488f5c70ccc41753afb75e-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-e647365c22\" (UID: \"b31f48cb56488f5c70ccc41753afb75e\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.168223 kubelet[3986]: I1216 13:10:39.168165 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b31f48cb56488f5c70ccc41753afb75e-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-e647365c22\" (UID: \"b31f48cb56488f5c70ccc41753afb75e\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.168223 kubelet[3986]: I1216 13:10:39.168179 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b31f48cb56488f5c70ccc41753afb75e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-e647365c22\" (UID: 
\"b31f48cb56488f5c70ccc41753afb75e\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.168380 kubelet[3986]: I1216 13:10:39.168327 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.168380 kubelet[3986]: I1216 13:10:39.168342 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.168380 kubelet[3986]: I1216 13:10:39.168354 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.168380 kubelet[3986]: I1216 13:10:39.168367 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.168519 kubelet[3986]: I1216 13:10:39.168499 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85ec924e28f5e66529f5c0c63010b93c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-e647365c22\" (UID: \"85ec924e28f5e66529f5c0c63010b93c\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" Dec 16 13:10:39.935529 kubelet[3986]: I1216 13:10:39.935509 3986 apiserver.go:52] "Watching apiserver" Dec 16 13:10:39.966406 kubelet[3986]: I1216 13:10:39.966386 3986 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:10:40.014897 kubelet[3986]: I1216 13:10:40.014882 3986 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" Dec 16 13:10:40.015579 kubelet[3986]: I1216 13:10:40.015495 3986 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:40.027319 kubelet[3986]: W1216 13:10:40.027258 3986 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:10:40.027319 kubelet[3986]: W1216 13:10:40.027256 3986 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 13:10:40.027319 kubelet[3986]: E1216 13:10:40.027305 3986 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-e647365c22\" already exists" 
pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" Dec 16 13:10:40.027421 kubelet[3986]: E1216 13:10:40.027305 3986 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-e647365c22\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" Dec 16 13:10:40.035945 kubelet[3986]: I1216 13:10:40.035887 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-e647365c22" podStartSLOduration=1.035878633 podStartE2EDuration="1.035878633s" podCreationTimestamp="2025-12-16 13:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:10:40.035223059 +0000 UTC m=+1.150764996" watchObservedRunningTime="2025-12-16 13:10:40.035878633 +0000 UTC m=+1.151420552" Dec 16 13:10:40.056188 kubelet[3986]: I1216 13:10:40.056158 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-a-e647365c22" podStartSLOduration=1.056145122 podStartE2EDuration="1.056145122s" podCreationTimestamp="2025-12-16 13:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:10:40.043241887 +0000 UTC m=+1.158783829" watchObservedRunningTime="2025-12-16 13:10:40.056145122 +0000 UTC m=+1.171687043" Dec 16 13:10:40.066532 kubelet[3986]: I1216 13:10:40.066498 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-a-e647365c22" podStartSLOduration=3.066489016 podStartE2EDuration="3.066489016s" podCreationTimestamp="2025-12-16 13:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:10:40.056507397 +0000 UTC m=+1.172049319" watchObservedRunningTime="2025-12-16 13:10:40.066489016 +0000 UTC m=+1.182030933" Dec 16 13:10:43.490229 update_engine[2541]: I20251216 13:10:43.490181 2541 update_attempter.cc:509] Updating boot flags... Dec 16 13:10:45.279627 kubelet[3986]: I1216 13:10:45.279601 3986 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:10:45.280430 containerd[2582]: time="2025-12-16T13:10:45.280397978Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:10:45.280880 kubelet[3986]: I1216 13:10:45.280865 3986 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:10:46.196738 systemd[1]: Created slice kubepods-besteffort-poda49330c6_8554_4ec7_9f8b_4db111ac9ced.slice - libcontainer container kubepods-besteffort-poda49330c6_8554_4ec7_9f8b_4db111ac9ced.slice. 
Dec 16 13:10:46.213282 kubelet[3986]: I1216 13:10:46.213182 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a49330c6-8554-4ec7-9f8b-4db111ac9ced-kube-proxy\") pod \"kube-proxy-k8696\" (UID: \"a49330c6-8554-4ec7-9f8b-4db111ac9ced\") " pod="kube-system/kube-proxy-k8696" Dec 16 13:10:46.213282 kubelet[3986]: I1216 13:10:46.213210 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a49330c6-8554-4ec7-9f8b-4db111ac9ced-xtables-lock\") pod \"kube-proxy-k8696\" (UID: \"a49330c6-8554-4ec7-9f8b-4db111ac9ced\") " pod="kube-system/kube-proxy-k8696" Dec 16 13:10:46.213282 kubelet[3986]: I1216 13:10:46.213236 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a49330c6-8554-4ec7-9f8b-4db111ac9ced-lib-modules\") pod \"kube-proxy-k8696\" (UID: \"a49330c6-8554-4ec7-9f8b-4db111ac9ced\") " pod="kube-system/kube-proxy-k8696" Dec 16 13:10:46.213282 kubelet[3986]: I1216 13:10:46.213249 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xslpw\" (UniqueName: \"kubernetes.io/projected/a49330c6-8554-4ec7-9f8b-4db111ac9ced-kube-api-access-xslpw\") pod \"kube-proxy-k8696\" (UID: \"a49330c6-8554-4ec7-9f8b-4db111ac9ced\") " pod="kube-system/kube-proxy-k8696" Dec 16 13:10:46.369335 kubelet[3986]: I1216 13:10:46.369304 3986 status_manager.go:890] "Failed to get status for pod" podUID="24bb0a98-4eaf-415e-be7d-ca76baac433b" pod="tigera-operator/tigera-operator-7dcd859c48-l7d2m" err="pods \"tigera-operator-7dcd859c48-l7d2m\" is forbidden: User \"system:node:ci-4547.0.0-a-e647365c22\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object" Dec 16 13:10:46.369820 kubelet[3986]: W1216 13:10:46.369379 3986 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4547.0.0-a-e647365c22" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object Dec 16 13:10:46.369820 kubelet[3986]: E1216 13:10:46.369403 3986 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4547.0.0-a-e647365c22\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object" logger="UnhandledError" Dec 16 13:10:46.370266 kubelet[3986]: W1216 13:10:46.370247 3986 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4547.0.0-a-e647365c22" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object Dec 16 13:10:46.370337 kubelet[3986]: E1216 13:10:46.370277 3986 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4547.0.0-a-e647365c22\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object" logger="UnhandledError" Dec 16 13:10:46.372472 systemd[1]: Created slice kubepods-besteffort-pod24bb0a98_4eaf_415e_be7d_ca76baac433b.slice - libcontainer container kubepods-besteffort-pod24bb0a98_4eaf_415e_be7d_ca76baac433b.slice. Dec 16 13:10:46.413981 kubelet[3986]: I1216 13:10:46.413962 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25xs\" (UniqueName: \"kubernetes.io/projected/24bb0a98-4eaf-415e-be7d-ca76baac433b-kube-api-access-x25xs\") pod \"tigera-operator-7dcd859c48-l7d2m\" (UID: \"24bb0a98-4eaf-415e-be7d-ca76baac433b\") " pod="tigera-operator/tigera-operator-7dcd859c48-l7d2m" Dec 16 13:10:46.414051 kubelet[3986]: I1216 13:10:46.413988 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/24bb0a98-4eaf-415e-be7d-ca76baac433b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-l7d2m\" (UID: \"24bb0a98-4eaf-415e-be7d-ca76baac433b\") " pod="tigera-operator/tigera-operator-7dcd859c48-l7d2m" Dec 16 13:10:46.505074 containerd[2582]: time="2025-12-16T13:10:46.504986207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8696,Uid:a49330c6-8554-4ec7-9f8b-4db111ac9ced,Namespace:kube-system,Attempt:0,}" Dec 16 13:10:46.550738 containerd[2582]: time="2025-12-16T13:10:46.550609662Z" level=info msg="connecting to shim ee07b35544c80c1afd70b5ebc4f7f0b67d6f9ed236d378ed04da05daacd57433" address="unix:///run/containerd/s/d8997a7e27d374259480cb32715e1ed4e3cf810bc7a72a602dc6062d204a3bfb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:10:46.572892 systemd[1]: Started cri-containerd-ee07b35544c80c1afd70b5ebc4f7f0b67d6f9ed236d378ed04da05daacd57433.scope - libcontainer container ee07b35544c80c1afd70b5ebc4f7f0b67d6f9ed236d378ed04da05daacd57433. 
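The reconciler_common records above list the volumes the kubelet attaches for kube-proxy-k8696: a "kube-proxy" ConfigMap, host paths named xtables-lock and lib-modules, and a projected service-account token (kube-api-access-xslpw). In the pod spec those entries correspond roughly to the following k8s.io/api volume definitions; the ConfigMap name matches the log, but the host paths are the conventional defaults and are assumptions, since the log records only volume names and plugin types:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	fileOrCreate := v1.HostPathFileOrCreate
	volumes := []v1.Volume{
		{
			Name: "kube-proxy",
			VolumeSource: v1.VolumeSource{
				ConfigMap: &v1.ConfigMapVolumeSource{
					LocalObjectReference: v1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
		{
			Name: "xtables-lock",
			VolumeSource: v1.VolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/run/xtables.lock", Type: &fileOrCreate},
			},
		},
		{
			Name: "lib-modules",
			VolumeSource: v1.VolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/lib/modules"},
			},
		},
		// kube-api-access-xslpw is a projected service-account token volume
		// injected automatically; it is not declared in the workload spec.
	}
	for _, vol := range volumes {
		fmt.Println(vol.Name)
	}
}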
Dec 16 13:10:46.578000 audit: BPF prog-id=157 op=LOAD Dec 16 13:10:46.582647 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 13:10:46.582735 kernel: audit: type=1334 audit(1765890646.578:463): prog-id=157 op=LOAD Dec 16 13:10:46.579000 audit: BPF prog-id=158 op=LOAD Dec 16 13:10:46.587740 kernel: audit: type=1334 audit(1765890646.579:464): prog-id=158 op=LOAD Dec 16 13:10:46.587795 kernel: audit: type=1300 audit(1765890646.579:464): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.587817 kernel: audit: type=1327 audit(1765890646.579:464): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.579000 audit[4083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.579000 audit: BPF prog-id=158 op=UNLOAD Dec 16 13:10:46.593727 kernel: audit: type=1334 audit(1765890646.579:465): prog-id=158 op=UNLOAD Dec 16 13:10:46.593777 kernel: audit: type=1300 audit(1765890646.579:465): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.579000 audit[4083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.603446 kernel: audit: type=1327 audit(1765890646.579:465): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.603522 kernel: audit: type=1334 audit(1765890646.579:466): prog-id=159 op=LOAD Dec 16 13:10:46.579000 audit: BPF prog-id=159 op=LOAD Dec 16 13:10:46.607716 kernel: audit: type=1300 audit(1765890646.579:466): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.579000 audit[4083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.611944 kernel: audit: type=1327 audit(1765890646.579:466): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.579000 audit: BPF prog-id=160 op=LOAD Dec 16 13:10:46.579000 audit[4083]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.579000 audit: BPF prog-id=160 op=UNLOAD Dec 16 13:10:46.579000 audit[4083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.579000 audit: BPF prog-id=159 op=UNLOAD Dec 16 13:10:46.579000 audit[4083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.579000 audit: BPF prog-id=161 op=LOAD Dec 16 13:10:46.579000 audit[4083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4070 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.579000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303762333535343463383063316166643730623565626334663766 Dec 16 13:10:46.617322 containerd[2582]: time="2025-12-16T13:10:46.617298871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8696,Uid:a49330c6-8554-4ec7-9f8b-4db111ac9ced,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee07b35544c80c1afd70b5ebc4f7f0b67d6f9ed236d378ed04da05daacd57433\"" Dec 16 13:10:46.619730 containerd[2582]: time="2025-12-16T13:10:46.619576694Z" level=info msg="CreateContainer within sandbox \"ee07b35544c80c1afd70b5ebc4f7f0b67d6f9ed236d378ed04da05daacd57433\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:10:46.639943 containerd[2582]: time="2025-12-16T13:10:46.637217048Z" level=info msg="Container 8dd069c247a0e0a21d05de3143db01ba9620a3df60d7bb4f706d5be80c912fcd: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:46.653820 containerd[2582]: time="2025-12-16T13:10:46.653798550Z" level=info msg="CreateContainer within sandbox \"ee07b35544c80c1afd70b5ebc4f7f0b67d6f9ed236d378ed04da05daacd57433\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8dd069c247a0e0a21d05de3143db01ba9620a3df60d7bb4f706d5be80c912fcd\"" Dec 16 13:10:46.654367 containerd[2582]: time="2025-12-16T13:10:46.654344717Z" level=info msg="StartContainer for \"8dd069c247a0e0a21d05de3143db01ba9620a3df60d7bb4f706d5be80c912fcd\"" Dec 16 13:10:46.655600 containerd[2582]: time="2025-12-16T13:10:46.655551018Z" level=info msg="connecting to shim 8dd069c247a0e0a21d05de3143db01ba9620a3df60d7bb4f706d5be80c912fcd" address="unix:///run/containerd/s/d8997a7e27d374259480cb32715e1ed4e3cf810bc7a72a602dc6062d204a3bfb" protocol=ttrpc version=3 Dec 16 13:10:46.672848 systemd[1]: Started cri-containerd-8dd069c247a0e0a21d05de3143db01ba9620a3df60d7bb4f706d5be80c912fcd.scope - libcontainer container 8dd069c247a0e0a21d05de3143db01ba9620a3df60d7bb4f706d5be80c912fcd. 
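The audit PROCTITLE fields scattered through these records are the hex-encoded, NUL-separated argv of the process that triggered the event; decoded, each one reads "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>...", matching the shim invocations containerd logs for each container. A small decoder for pasting in one of these strings:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle converts an audit PROCTITLE value (hex-encoded argv with
// NUL bytes between arguments) back into the original command line.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// Truncated prefix of a proctitle value from the records above;
	// decodes to "runc --root /run/containerd/runc/k8s.io".
	const sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	args, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(args, " "))
}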
Dec 16 13:10:46.709000 audit: BPF prog-id=162 op=LOAD Dec 16 13:10:46.709000 audit[4108]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4070 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864643036396332343761306530613231643035646533313433646230 Dec 16 13:10:46.709000 audit: BPF prog-id=163 op=LOAD Dec 16 13:10:46.709000 audit[4108]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4070 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864643036396332343761306530613231643035646533313433646230 Dec 16 13:10:46.709000 audit: BPF prog-id=163 op=UNLOAD Dec 16 13:10:46.709000 audit[4108]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4070 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864643036396332343761306530613231643035646533313433646230 Dec 16 13:10:46.709000 audit: BPF prog-id=162 op=UNLOAD Dec 16 13:10:46.709000 audit[4108]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4070 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864643036396332343761306530613231643035646533313433646230 Dec 16 13:10:46.709000 audit: BPF prog-id=164 op=LOAD Dec 16 13:10:46.709000 audit[4108]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4070 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864643036396332343761306530613231643035646533313433646230 Dec 16 13:10:46.728573 containerd[2582]: time="2025-12-16T13:10:46.728550572Z" level=info msg="StartContainer for 
\"8dd069c247a0e0a21d05de3143db01ba9620a3df60d7bb4f706d5be80c912fcd\" returns successfully" Dec 16 13:10:46.805000 audit[4170]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=4170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:46.805000 audit[4170]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc926cef20 a2=0 a3=7ffc926cef0c items=0 ppid=4121 pid=4170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:10:46.807000 audit[4172]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=4172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:46.807000 audit[4172]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed9fdaae0 a2=0 a3=7ffed9fdaacc items=0 ppid=4121 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:10:46.809000 audit[4173]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=4173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.809000 audit[4173]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeefe9b010 a2=0 a3=7ffeefe9affc items=0 ppid=4121 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:10:46.810000 audit[4174]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=4174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:46.810000 audit[4174]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb4f08fe0 a2=0 a3=7ffcb4f08fcc items=0 ppid=4121 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:10:46.811000 audit[4175]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=4175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.811000 audit[4175]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff085b19a0 a2=0 a3=7fff085b198c items=0 ppid=4121 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:10:46.812000 audit[4176]: NETFILTER_CFG table=filter:62 
family=2 entries=1 op=nft_register_chain pid=4176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.812000 audit[4176]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeff9c8ee0 a2=0 a3=7ffeff9c8ecc items=0 ppid=4121 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:10:46.905000 audit[4177]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.905000 audit[4177]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffe6a3b2a0 a2=0 a3=7fffe6a3b28c items=0 ppid=4121 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:10:46.907000 audit[4179]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.907000 audit[4179]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd612e7f20 a2=0 a3=7ffd612e7f0c items=0 ppid=4121 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.907000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 13:10:46.910000 audit[4182]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.910000 audit[4182]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe6a110160 a2=0 a3=7ffe6a11014c items=0 ppid=4121 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 13:10:46.911000 audit[4183]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.911000 audit[4183]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd545d33f0 a2=0 a3=7ffd545d33dc items=0 ppid=4121 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.911000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:10:46.913000 audit[4185]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.913000 audit[4185]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd406346f0 a2=0 a3=7ffd406346dc items=0 ppid=4121 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:10:46.914000 audit[4186]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.914000 audit[4186]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3281b820 a2=0 a3=7fff3281b80c items=0 ppid=4121 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:10:46.916000 audit[4188]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.916000 audit[4188]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff5e7f0590 a2=0 a3=7fff5e7f057c items=0 ppid=4121 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:10:46.919000 audit[4191]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.919000 audit[4191]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffe3c65360 a2=0 a3=7fffe3c6534c items=0 ppid=4121 pid=4191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.919000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 13:10:46.920000 audit[4192]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.920000 audit[4192]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe666baf00 a2=0 a3=7ffe666baeec items=0 
ppid=4121 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.920000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:10:46.922000 audit[4194]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.922000 audit[4194]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc8c93bc0 a2=0 a3=7ffdc8c93bac items=0 ppid=4121 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.922000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:10:46.923000 audit[4195]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.923000 audit[4195]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffed795d90 a2=0 a3=7fffed795d7c items=0 ppid=4121 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.923000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:10:46.925000 audit[4197]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.925000 audit[4197]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee9b63470 a2=0 a3=7ffee9b6345c items=0 ppid=4121 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:10:46.928000 audit[4200]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.928000 audit[4200]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd1e924820 a2=0 a3=7ffd1e92480c items=0 ppid=4121 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.928000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:10:46.931000 audit[4203]: NETFILTER_CFG table=filter:76 
family=2 entries=1 op=nft_register_rule pid=4203 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.931000 audit[4203]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe38e84980 a2=0 a3=7ffe38e8496c items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:10:46.931000 audit[4204]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.931000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff11546100 a2=0 a3=7fff115460ec items=0 ppid=4121 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:10:46.933000 audit[4206]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.933000 audit[4206]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff2e192e10 a2=0 a3=7fff2e192dfc items=0 ppid=4121 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:10:46.936000 audit[4209]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.936000 audit[4209]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc33b246a0 a2=0 a3=7ffc33b2468c items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:10:46.937000 audit[4210]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.937000 audit[4210]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe58ac4450 a2=0 a3=7ffe58ac443c items=0 ppid=4121 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.937000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:10:46.939000 audit[4212]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:10:46.939000 audit[4212]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe6ea9b250 a2=0 a3=7ffe6ea9b23c items=0 ppid=4121 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:46.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:10:47.049932 kubelet[3986]: I1216 13:10:47.049871 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k8696" podStartSLOduration=1.049855944 podStartE2EDuration="1.049855944s" podCreationTimestamp="2025-12-16 13:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:10:47.03822473 +0000 UTC m=+8.153766650" watchObservedRunningTime="2025-12-16 13:10:47.049855944 +0000 UTC m=+8.165397865" Dec 16 13:10:47.083000 audit[4218]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:47.083000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffae6050b0 a2=0 a3=7fffae60509c items=0 ppid=4121 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:47.124000 audit[4218]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:47.124000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffae6050b0 a2=0 a3=7fffae60509c items=0 ppid=4121 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:47.126000 audit[4223]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.126000 audit[4223]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd7831fcc0 a2=0 a3=7ffd7831fcac items=0 ppid=4121 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.126000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:10:47.128000 
audit[4225]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.128000 audit[4225]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe3f319350 a2=0 a3=7ffe3f31933c items=0 ppid=4121 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.128000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 13:10:47.131000 audit[4228]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.131000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc2f809140 a2=0 a3=7ffc2f80912c items=0 ppid=4121 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.131000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 13:10:47.132000 audit[4229]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.132000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddddc7f00 a2=0 a3=7ffddddc7eec items=0 ppid=4121 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:10:47.134000 audit[4231]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.134000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd840ea700 a2=0 a3=7ffd840ea6ec items=0 ppid=4121 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.134000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:10:47.135000 audit[4232]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.135000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca37aad00 a2=0 a3=7ffca37aacec items=0 ppid=4121 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:10:47.137000 audit[4234]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.137000 audit[4234]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe207fc120 a2=0 a3=7ffe207fc10c items=0 ppid=4121 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 13:10:47.140000 audit[4237]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.140000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe0cbe43d0 a2=0 a3=7ffe0cbe43bc items=0 ppid=4121 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:10:47.141000 audit[4238]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.141000 audit[4238]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc917d0e90 a2=0 a3=7ffc917d0e7c items=0 ppid=4121 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:10:47.143000 audit[4240]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.143000 audit[4240]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff101e56a0 a2=0 a3=7fff101e568c items=0 ppid=4121 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:10:47.144000 audit[4241]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 
13:10:47.144000 audit[4241]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffedd760d0 a2=0 a3=7fffedd760bc items=0 ppid=4121 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:10:47.146000 audit[4243]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.146000 audit[4243]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff762ad2d0 a2=0 a3=7fff762ad2bc items=0 ppid=4121 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:10:47.148000 audit[4246]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.148000 audit[4246]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffaeb9f9b0 a2=0 a3=7fffaeb9f99c items=0 ppid=4121 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:10:47.151000 audit[4249]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.151000 audit[4249]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc7ce4810 a2=0 a3=7ffcc7ce47fc items=0 ppid=4121 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 13:10:47.152000 audit[4250]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.152000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcfa6a0bb0 a2=0 a3=7ffcfa6a0b9c items=0 ppid=4121 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.152000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:10:47.154000 audit[4252]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.154000 audit[4252]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff1581c970 a2=0 a3=7fff1581c95c items=0 ppid=4121 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.154000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:10:47.157000 audit[4255]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.157000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc6e3339f0 a2=0 a3=7ffc6e3339dc items=0 ppid=4121 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:10:47.158000 audit[4256]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.158000 audit[4256]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd54e19ec0 a2=0 a3=7ffd54e19eac items=0 ppid=4121 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:10:47.160000 audit[4258]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.160000 audit[4258]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeb870e250 a2=0 a3=7ffeb870e23c items=0 ppid=4121 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:10:47.161000 audit[4259]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.161000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd47243f00 a2=0 a3=7ffd47243eec items=0 ppid=4121 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:10:47.163000 audit[4261]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.163000 audit[4261]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcb035c1f0 a2=0 a3=7ffcb035c1dc items=0 ppid=4121 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:10:47.166000 audit[4264]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:10:47.166000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd21120a80 a2=0 a3=7ffd21120a6c items=0 ppid=4121 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.166000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:10:47.168000 audit[4266]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:10:47.168000 audit[4266]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffec787d890 a2=0 a3=7ffec787d87c items=0 ppid=4121 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.168000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:47.168000 audit[4266]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:10:47.168000 audit[4266]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffec787d890 a2=0 a3=7ffec787d87c items=0 ppid=4121 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.168000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:47.575857 containerd[2582]: time="2025-12-16T13:10:47.575829857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l7d2m,Uid:24bb0a98-4eaf-415e-be7d-ca76baac433b,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:10:47.617518 containerd[2582]: time="2025-12-16T13:10:47.617050925Z" level=info msg="connecting to shim a05d663d8bf350ca8672e41f13f4fe6d0ded540fb38da5893a43185185ac8565" 
address="unix:///run/containerd/s/d3a868d9c63d9f03b9e52deb18aafa3f98764b8600f61b707aa4436145bb2da6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:10:47.635883 systemd[1]: Started cri-containerd-a05d663d8bf350ca8672e41f13f4fe6d0ded540fb38da5893a43185185ac8565.scope - libcontainer container a05d663d8bf350ca8672e41f13f4fe6d0ded540fb38da5893a43185185ac8565. Dec 16 13:10:47.641000 audit: BPF prog-id=165 op=LOAD Dec 16 13:10:47.642000 audit: BPF prog-id=166 op=LOAD Dec 16 13:10:47.642000 audit[4289]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4277 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130356436363364386266333530636138363732653431663133663466 Dec 16 13:10:47.642000 audit: BPF prog-id=166 op=UNLOAD Dec 16 13:10:47.642000 audit[4289]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4277 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130356436363364386266333530636138363732653431663133663466 Dec 16 13:10:47.642000 audit: BPF prog-id=167 op=LOAD Dec 16 13:10:47.642000 audit[4289]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4277 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130356436363364386266333530636138363732653431663133663466 Dec 16 13:10:47.642000 audit: BPF prog-id=168 op=LOAD Dec 16 13:10:47.642000 audit[4289]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4277 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130356436363364386266333530636138363732653431663133663466 Dec 16 13:10:47.642000 audit: BPF prog-id=168 op=UNLOAD Dec 16 13:10:47.642000 audit[4289]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4277 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.642000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130356436363364386266333530636138363732653431663133663466 Dec 16 13:10:47.642000 audit: BPF prog-id=167 op=UNLOAD Dec 16 13:10:47.642000 audit[4289]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4277 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130356436363364386266333530636138363732653431663133663466 Dec 16 13:10:47.642000 audit: BPF prog-id=169 op=LOAD Dec 16 13:10:47.642000 audit[4289]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4277 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:47.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130356436363364386266333530636138363732653431663133663466 Dec 16 13:10:47.670422 containerd[2582]: time="2025-12-16T13:10:47.670399034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l7d2m,Uid:24bb0a98-4eaf-415e-be7d-ca76baac433b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a05d663d8bf350ca8672e41f13f4fe6d0ded540fb38da5893a43185185ac8565\"" Dec 16 13:10:47.671839 containerd[2582]: time="2025-12-16T13:10:47.671818855Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:10:49.053652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472697183.mount: Deactivated successfully. 
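
The NETFILTER_CFG records earlier in this section (the iptables and ip6tables invocations from kube-proxy, pids 4170 through 4266) show the proxy bootstrapping its KUBE-* chains in the mangle, nat, and filter tables for IPv4 (family=2) and IPv6 (family=10); their PROCTITLE fields decode to commands such as iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle. A rough analysis sketch, assuming this journal has been exported to a plain-text file (journal.txt is a placeholder name), that tallies those events by family, table, and operation:

import re
from collections import Counter

LOG_PATH = "journal.txt"  # placeholder: a plain-text export of this journal

# family=2 is AF_INET (iptables), family=10 is AF_INET6 (ip6tables).
PAT = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")
FAMILY = {"2": "ipv4", "10": "ipv6"}

counts = Counter()
with open(LOG_PATH) as fh:
    for line in fh:
        for table, family, entries, op in PAT.findall(line):
            counts[(FAMILY.get(family, family), table, op)] += int(entries)

for key, total in sorted(counts.items()):
    print(key, total)
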
Dec 16 13:10:49.433632 containerd[2582]: time="2025-12-16T13:10:49.433601253Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:49.438812 containerd[2582]: time="2025-12-16T13:10:49.438791471Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Dec 16 13:10:49.441988 containerd[2582]: time="2025-12-16T13:10:49.441950142Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:49.445220 containerd[2582]: time="2025-12-16T13:10:49.445118722Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:10:49.446252 containerd[2582]: time="2025-12-16T13:10:49.445601178Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.773752852s" Dec 16 13:10:49.446252 containerd[2582]: time="2025-12-16T13:10:49.445628091Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:10:49.448492 containerd[2582]: time="2025-12-16T13:10:49.448469267Z" level=info msg="CreateContainer within sandbox \"a05d663d8bf350ca8672e41f13f4fe6d0ded540fb38da5893a43185185ac8565\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:10:49.471247 containerd[2582]: time="2025-12-16T13:10:49.471219851Z" level=info msg="Container 0edf0b6cbecee95803965fe26ac83436c27911ab088440bc4f54a0ee56add22c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:49.473252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149250008.mount: Deactivated successfully. Dec 16 13:10:49.486602 containerd[2582]: time="2025-12-16T13:10:49.486578162Z" level=info msg="CreateContainer within sandbox \"a05d663d8bf350ca8672e41f13f4fe6d0ded540fb38da5893a43185185ac8565\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0edf0b6cbecee95803965fe26ac83436c27911ab088440bc4f54a0ee56add22c\"" Dec 16 13:10:49.487020 containerd[2582]: time="2025-12-16T13:10:49.487000846Z" level=info msg="StartContainer for \"0edf0b6cbecee95803965fe26ac83436c27911ab088440bc4f54a0ee56add22c\"" Dec 16 13:10:49.487823 containerd[2582]: time="2025-12-16T13:10:49.487785307Z" level=info msg="connecting to shim 0edf0b6cbecee95803965fe26ac83436c27911ab088440bc4f54a0ee56add22c" address="unix:///run/containerd/s/d3a868d9c63d9f03b9e52deb18aafa3f98764b8600f61b707aa4436145bb2da6" protocol=ttrpc version=3 Dec 16 13:10:49.504858 systemd[1]: Started cri-containerd-0edf0b6cbecee95803965fe26ac83436c27911ab088440bc4f54a0ee56add22c.scope - libcontainer container 0edf0b6cbecee95803965fe26ac83436c27911ab088440bc4f54a0ee56add22c. 
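
The audit: BPF prog-id=NNN op=LOAD and op=UNLOAD events that bracket each runc start come with raw SYSCALL records; arch=c000003e identifies x86_64, where syscall 321 is bpf(2), 3 is close(2), and 46 is sendmsg(2) (the netlink path used by the xtables-nft tools above). A purely illustrative helper; the function name and the tiny lookup table are assumptions, and ausyscall x86_64 <number> from the audit package prints the same names:

import re

# Only the raw x86_64 syscall numbers that appear in this section of the log.
X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

def label_syscall(record: str) -> str:
    # Pull syscall= out of an x86_64 (arch=c000003e) audit SYSCALL record.
    m = re.search(r"arch=c000003e syscall=(\d+)", record)
    if not m:
        return "not an x86_64 SYSCALL record"
    nr = int(m.group(1))
    return f"syscall {nr} = {X86_64_SYSCALLS.get(nr, 'unknown (see ausyscall)')}"

print(label_syscall("audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes"))
# prints: syscall 321 = bpf
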
Dec 16 13:10:49.510000 audit: BPF prog-id=170 op=LOAD Dec 16 13:10:49.511000 audit: BPF prog-id=171 op=LOAD Dec 16 13:10:49.511000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4277 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:49.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065646630623663626563656539353830333936356665323661633833 Dec 16 13:10:49.511000 audit: BPF prog-id=171 op=UNLOAD Dec 16 13:10:49.511000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4277 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:49.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065646630623663626563656539353830333936356665323661633833 Dec 16 13:10:49.511000 audit: BPF prog-id=172 op=LOAD Dec 16 13:10:49.511000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4277 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:49.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065646630623663626563656539353830333936356665323661633833 Dec 16 13:10:49.511000 audit: BPF prog-id=173 op=LOAD Dec 16 13:10:49.511000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4277 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:49.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065646630623663626563656539353830333936356665323661633833 Dec 16 13:10:49.511000 audit: BPF prog-id=173 op=UNLOAD Dec 16 13:10:49.511000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4277 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:49.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065646630623663626563656539353830333936356665323661633833 Dec 16 13:10:49.511000 audit: BPF prog-id=172 op=UNLOAD Dec 16 13:10:49.511000 audit[4322]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4277 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:49.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065646630623663626563656539353830333936356665323661633833 Dec 16 13:10:49.511000 audit: BPF prog-id=174 op=LOAD Dec 16 13:10:49.511000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4277 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:49.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065646630623663626563656539353830333936356665323661633833 Dec 16 13:10:49.528456 containerd[2582]: time="2025-12-16T13:10:49.528295919Z" level=info msg="StartContainer for \"0edf0b6cbecee95803965fe26ac83436c27911ab088440bc4f54a0ee56add22c\" returns successfully" Dec 16 13:10:50.498425 kubelet[3986]: I1216 13:10:50.498155 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-l7d2m" podStartSLOduration=2.723070211 podStartE2EDuration="4.498139147s" podCreationTimestamp="2025-12-16 13:10:46 +0000 UTC" firstStartedPulling="2025-12-16 13:10:47.671339016 +0000 UTC m=+8.786880925" lastFinishedPulling="2025-12-16 13:10:49.446407942 +0000 UTC m=+10.561949861" observedRunningTime="2025-12-16 13:10:50.042313733 +0000 UTC m=+11.157855657" watchObservedRunningTime="2025-12-16 13:10:50.498139147 +0000 UTC m=+11.613681067" Dec 16 13:10:54.827447 sudo[3015]: pam_unix(sudo:session): session closed for user root Dec 16 13:10:54.828000 audit[3015]: USER_END pid=3015 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:54.830766 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 13:10:54.831041 kernel: audit: type=1106 audit(1765890654.828:543): pid=3015 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:54.828000 audit[3015]: CRED_DISP pid=3015 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:10:54.838745 kernel: audit: type=1104 audit(1765890654.828:544): pid=3015 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:10:54.934127 sshd[3014]: Connection closed by 10.200.16.10 port 48208 Dec 16 13:10:54.935002 sshd-session[3010]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:54.935000 audit[3010]: USER_END pid=3010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:54.941797 kernel: audit: type=1106 audit(1765890654.935:545): pid=3010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:54.941666 systemd-logind[2540]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:10:54.942726 systemd[1]: sshd@6-10.200.8.11:22-10.200.16.10:48208.service: Deactivated successfully. Dec 16 13:10:54.936000 audit[3010]: CRED_DISP pid=3010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:54.947719 kernel: audit: type=1104 audit(1765890654.936:546): pid=3010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:10:54.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.11:22-10.200.16.10:48208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:54.951280 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:10:54.951779 systemd[1]: session-10.scope: Consumed 2.670s CPU time, 229.5M memory peak. Dec 16 13:10:54.953728 kernel: audit: type=1131 audit(1765890654.943:547): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.11:22-10.200.16.10:48208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:54.956266 systemd-logind[2540]: Removed session 10. 
Dec 16 13:10:55.597000 audit[4402]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:55.601729 kernel: audit: type=1325 audit(1765890655.597:548): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:55.601801 kernel: audit: type=1300 audit(1765890655.597:548): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff0a1eb190 a2=0 a3=7fff0a1eb17c items=0 ppid=4121 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:55.597000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff0a1eb190 a2=0 a3=7fff0a1eb17c items=0 ppid=4121 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:55.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:55.612726 kernel: audit: type=1327 audit(1765890655.597:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:55.601000 audit[4402]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:55.616727 kernel: audit: type=1325 audit(1765890655.601:549): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:55.601000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0a1eb190 a2=0 a3=0 items=0 ppid=4121 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:55.622755 kernel: audit: type=1300 audit(1765890655.601:549): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0a1eb190 a2=0 a3=0 items=0 ppid=4121 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:55.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:55.631000 audit[4404]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:55.631000 audit[4404]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe96635330 a2=0 a3=7ffe9663531c items=0 ppid=4121 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:55.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:55.635000 audit[4404]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4404 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:55.635000 audit[4404]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe96635330 a2=0 a3=0 items=0 ppid=4121 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:55.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:57.495000 audit[4406]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:57.495000 audit[4406]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe389d65a0 a2=0 a3=7ffe389d658c items=0 ppid=4121 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:57.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:57.497000 audit[4406]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:57.497000 audit[4406]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe389d65a0 a2=0 a3=0 items=0 ppid=4121 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:57.497000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:57.629000 audit[4408]: NETFILTER_CFG table=filter:114 family=2 entries=19 op=nft_register_rule pid=4408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:57.629000 audit[4408]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe56094800 a2=0 a3=7ffe560947ec items=0 ppid=4121 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:57.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:57.635000 audit[4408]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:57.635000 audit[4408]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe56094800 a2=0 a3=0 items=0 ppid=4121 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:57.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:58.649000 audit[4410]: NETFILTER_CFG table=filter:116 family=2 entries=20 op=nft_register_rule pid=4410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:58.649000 audit[4410]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7fff2d560350 a2=0 a3=7fff2d56033c items=0 ppid=4121 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:58.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:58.654000 audit[4410]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:58.654000 audit[4410]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2d560350 a2=0 a3=0 items=0 ppid=4121 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:58.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:59.028170 systemd[1]: Created slice kubepods-besteffort-podced74d09_4b72_4c85_b86b_9369e81f024c.slice - libcontainer container kubepods-besteffort-podced74d09_4b72_4c85_b86b_9369e81f024c.slice. Dec 16 13:10:59.093851 kubelet[3986]: I1216 13:10:59.093829 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ced74d09-4b72-4c85-b86b-9369e81f024c-typha-certs\") pod \"calico-typha-59c468dc9b-c6hvg\" (UID: \"ced74d09-4b72-4c85-b86b-9369e81f024c\") " pod="calico-system/calico-typha-59c468dc9b-c6hvg" Dec 16 13:10:59.094167 kubelet[3986]: I1216 13:10:59.094127 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ced74d09-4b72-4c85-b86b-9369e81f024c-tigera-ca-bundle\") pod \"calico-typha-59c468dc9b-c6hvg\" (UID: \"ced74d09-4b72-4c85-b86b-9369e81f024c\") " pod="calico-system/calico-typha-59c468dc9b-c6hvg" Dec 16 13:10:59.094167 kubelet[3986]: I1216 13:10:59.094146 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhbsq\" (UniqueName: \"kubernetes.io/projected/ced74d09-4b72-4c85-b86b-9369e81f024c-kube-api-access-fhbsq\") pod \"calico-typha-59c468dc9b-c6hvg\" (UID: \"ced74d09-4b72-4c85-b86b-9369e81f024c\") " pod="calico-system/calico-typha-59c468dc9b-c6hvg" Dec 16 13:10:59.203039 kubelet[3986]: W1216 13:10:59.202910 3986 reflector.go:569] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4547.0.0-a-e647365c22" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object Dec 16 13:10:59.203039 kubelet[3986]: E1216 13:10:59.202946 3986 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"cni-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:ci-4547.0.0-a-e647365c22\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object" logger="UnhandledError" Dec 16 13:10:59.203039 kubelet[3986]: I1216 13:10:59.202980 3986 status_manager.go:890] "Failed to get status 
for pod" podUID="5d2fb090-a30d-4f98-bcef-0e64c3eba3e5" pod="calico-system/calico-node-s7774" err="pods \"calico-node-s7774\" is forbidden: User \"system:node:ci-4547.0.0-a-e647365c22\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object" Dec 16 13:10:59.203039 kubelet[3986]: W1216 13:10:59.203013 3986 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4547.0.0-a-e647365c22" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object Dec 16 13:10:59.203039 kubelet[3986]: E1216 13:10:59.203023 3986 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ci-4547.0.0-a-e647365c22\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.0.0-a-e647365c22' and this object" logger="UnhandledError" Dec 16 13:10:59.209981 systemd[1]: Created slice kubepods-besteffort-pod5d2fb090_a30d_4f98_bcef_0e64c3eba3e5.slice - libcontainer container kubepods-besteffort-pod5d2fb090_a30d_4f98_bcef_0e64c3eba3e5.slice. Dec 16 13:10:59.295548 kubelet[3986]: I1216 13:10:59.295194 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-lib-modules\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295548 kubelet[3986]: I1216 13:10:59.295220 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-policysync\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295548 kubelet[3986]: I1216 13:10:59.295235 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-var-lib-calico\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295548 kubelet[3986]: I1216 13:10:59.295252 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-flexvol-driver-host\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295548 kubelet[3986]: I1216 13:10:59.295266 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjlg6\" (UniqueName: \"kubernetes.io/projected/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-kube-api-access-qjlg6\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295728 kubelet[3986]: I1216 13:10:59.295279 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-var-run-calico\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295728 kubelet[3986]: I1216 13:10:59.295292 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-cni-bin-dir\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295728 kubelet[3986]: I1216 13:10:59.295305 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-cni-log-dir\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295728 kubelet[3986]: I1216 13:10:59.295316 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-cni-net-dir\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295728 kubelet[3986]: I1216 13:10:59.295330 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-node-certs\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295803 kubelet[3986]: I1216 13:10:59.295346 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-tigera-ca-bundle\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.295803 kubelet[3986]: I1216 13:10:59.295363 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d2fb090-a30d-4f98-bcef-0e64c3eba3e5-xtables-lock\") pod \"calico-node-s7774\" (UID: \"5d2fb090-a30d-4f98-bcef-0e64c3eba3e5\") " pod="calico-system/calico-node-s7774" Dec 16 13:10:59.333159 containerd[2582]: time="2025-12-16T13:10:59.333129456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59c468dc9b-c6hvg,Uid:ced74d09-4b72-4c85-b86b-9369e81f024c,Namespace:calico-system,Attempt:0,}" Dec 16 13:10:59.377015 containerd[2582]: time="2025-12-16T13:10:59.376988427Z" level=info msg="connecting to shim 75c6647db56fde80114629d1826eef7b0002ee742e6c255455aa45a4054df4e3" address="unix:///run/containerd/s/1300d8229b3166c786cf6b2757521d84e276a2eee941e7693717775608f3b226" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:10:59.400884 kubelet[3986]: E1216 13:10:59.400862 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.400960 kubelet[3986]: W1216 13:10:59.400888 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.400960 kubelet[3986]: E1216 
13:10:59.400913 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.412846 systemd[1]: Started cri-containerd-75c6647db56fde80114629d1826eef7b0002ee742e6c255455aa45a4054df4e3.scope - libcontainer container 75c6647db56fde80114629d1826eef7b0002ee742e6c255455aa45a4054df4e3. Dec 16 13:10:59.418516 kubelet[3986]: E1216 13:10:59.411858 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:10:59.421461 kubelet[3986]: E1216 13:10:59.421437 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.421461 kubelet[3986]: W1216 13:10:59.421454 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.421556 kubelet[3986]: E1216 13:10:59.421469 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.452000 audit: BPF prog-id=175 op=LOAD Dec 16 13:10:59.453000 audit: BPF prog-id=176 op=LOAD Dec 16 13:10:59.453000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:59.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735633636343764623536666465383031313436323964313832366565 Dec 16 13:10:59.453000 audit: BPF prog-id=176 op=UNLOAD Dec 16 13:10:59.453000 audit[4432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:59.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735633636343764623536666465383031313436323964313832366565 Dec 16 13:10:59.453000 audit: BPF prog-id=177 op=LOAD Dec 16 13:10:59.453000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:59.453000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735633636343764623536666465383031313436323964313832366565 Dec 16 13:10:59.453000 audit: BPF prog-id=178 op=LOAD Dec 16 13:10:59.453000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:59.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735633636343764623536666465383031313436323964313832366565 Dec 16 13:10:59.453000 audit: BPF prog-id=178 op=UNLOAD Dec 16 13:10:59.453000 audit[4432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:59.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735633636343764623536666465383031313436323964313832366565 Dec 16 13:10:59.453000 audit: BPF prog-id=177 op=UNLOAD Dec 16 13:10:59.453000 audit[4432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:59.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735633636343764623536666465383031313436323964313832366565 Dec 16 13:10:59.453000 audit: BPF prog-id=179 op=LOAD Dec 16 13:10:59.453000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:59.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735633636343764623536666465383031313436323964313832366565 Dec 16 13:10:59.487683 kubelet[3986]: E1216 13:10:59.487664 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.487683 kubelet[3986]: W1216 13:10:59.487678 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.487807 kubelet[3986]: E1216 13:10:59.487690 3986 plugins.go:695] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.488325 kubelet[3986]: E1216 13:10:59.488303 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.488382 kubelet[3986]: W1216 13:10:59.488324 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.488382 kubelet[3986]: E1216 13:10:59.488353 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.488672 kubelet[3986]: E1216 13:10:59.488620 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.488672 kubelet[3986]: W1216 13:10:59.488634 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.488672 kubelet[3986]: E1216 13:10:59.488644 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.488978 kubelet[3986]: E1216 13:10:59.488960 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.488978 kubelet[3986]: W1216 13:10:59.488978 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.489066 kubelet[3986]: E1216 13:10:59.488987 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.489177 kubelet[3986]: E1216 13:10:59.489167 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.489177 kubelet[3986]: W1216 13:10:59.489175 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.489264 kubelet[3986]: E1216 13:10:59.489183 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.489495 kubelet[3986]: E1216 13:10:59.489457 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.489495 kubelet[3986]: W1216 13:10:59.489490 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.489835 kubelet[3986]: E1216 13:10:59.489498 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.489835 kubelet[3986]: E1216 13:10:59.489624 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.489835 kubelet[3986]: W1216 13:10:59.489629 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.489835 kubelet[3986]: E1216 13:10:59.489636 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.489835 kubelet[3986]: E1216 13:10:59.489758 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.489835 kubelet[3986]: W1216 13:10:59.489763 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.489835 kubelet[3986]: E1216 13:10:59.489770 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.490665 kubelet[3986]: E1216 13:10:59.489871 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.490665 kubelet[3986]: W1216 13:10:59.489875 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.490665 kubelet[3986]: E1216 13:10:59.489881 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.490665 kubelet[3986]: E1216 13:10:59.489957 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.490665 kubelet[3986]: W1216 13:10:59.489961 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.490665 kubelet[3986]: E1216 13:10:59.489966 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.490665 kubelet[3986]: E1216 13:10:59.490046 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.490665 kubelet[3986]: W1216 13:10:59.490050 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.490665 kubelet[3986]: E1216 13:10:59.490055 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.490665 kubelet[3986]: E1216 13:10:59.490138 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.490920 kubelet[3986]: W1216 13:10:59.490143 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.490920 kubelet[3986]: E1216 13:10:59.490148 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.490920 kubelet[3986]: E1216 13:10:59.490262 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.490920 kubelet[3986]: W1216 13:10:59.490266 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.490920 kubelet[3986]: E1216 13:10:59.490272 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.490920 kubelet[3986]: E1216 13:10:59.490438 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.490920 kubelet[3986]: W1216 13:10:59.490445 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.490920 kubelet[3986]: E1216 13:10:59.490451 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.490920 kubelet[3986]: E1216 13:10:59.490782 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.490920 kubelet[3986]: W1216 13:10:59.490792 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.491133 kubelet[3986]: E1216 13:10:59.490802 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.491133 kubelet[3986]: E1216 13:10:59.490911 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.491133 kubelet[3986]: W1216 13:10:59.490915 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.491133 kubelet[3986]: E1216 13:10:59.490921 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.491133 kubelet[3986]: E1216 13:10:59.491019 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.491133 kubelet[3986]: W1216 13:10:59.491022 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.491133 kubelet[3986]: E1216 13:10:59.491028 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.491133 kubelet[3986]: E1216 13:10:59.491105 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.491133 kubelet[3986]: W1216 13:10:59.491109 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.491133 kubelet[3986]: E1216 13:10:59.491113 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.491333 kubelet[3986]: E1216 13:10:59.491185 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.491333 kubelet[3986]: W1216 13:10:59.491189 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.491333 kubelet[3986]: E1216 13:10:59.491194 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.491333 kubelet[3986]: E1216 13:10:59.491269 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.491333 kubelet[3986]: W1216 13:10:59.491273 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.491333 kubelet[3986]: E1216 13:10:59.491278 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.492543 containerd[2582]: time="2025-12-16T13:10:59.492507450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59c468dc9b-c6hvg,Uid:ced74d09-4b72-4c85-b86b-9369e81f024c,Namespace:calico-system,Attempt:0,} returns sandbox id \"75c6647db56fde80114629d1826eef7b0002ee742e6c255455aa45a4054df4e3\"" Dec 16 13:10:59.494074 containerd[2582]: time="2025-12-16T13:10:59.493985895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:10:59.498289 kubelet[3986]: E1216 13:10:59.498274 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.498289 kubelet[3986]: W1216 13:10:59.498288 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.498374 kubelet[3986]: E1216 13:10:59.498300 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.498374 kubelet[3986]: I1216 13:10:59.498344 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40ac25d7-4601-4254-b29f-0ca4ec170f77-socket-dir\") pod \"csi-node-driver-tj5zh\" (UID: \"40ac25d7-4601-4254-b29f-0ca4ec170f77\") " pod="calico-system/csi-node-driver-tj5zh" Dec 16 13:10:59.498579 kubelet[3986]: E1216 13:10:59.498567 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.498614 kubelet[3986]: W1216 13:10:59.498580 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.498614 kubelet[3986]: E1216 13:10:59.498595 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.498655 kubelet[3986]: I1216 13:10:59.498612 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40ac25d7-4601-4254-b29f-0ca4ec170f77-kubelet-dir\") pod \"csi-node-driver-tj5zh\" (UID: \"40ac25d7-4601-4254-b29f-0ca4ec170f77\") " pod="calico-system/csi-node-driver-tj5zh" Dec 16 13:10:59.498788 kubelet[3986]: E1216 13:10:59.498779 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.498822 kubelet[3986]: W1216 13:10:59.498788 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.498977 kubelet[3986]: E1216 13:10:59.498964 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.499041 kubelet[3986]: E1216 13:10:59.499018 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.499062 kubelet[3986]: W1216 13:10:59.499043 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.499062 kubelet[3986]: E1216 13:10:59.499053 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.499101 kubelet[3986]: I1216 13:10:59.499070 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40ac25d7-4601-4254-b29f-0ca4ec170f77-registration-dir\") pod \"csi-node-driver-tj5zh\" (UID: \"40ac25d7-4601-4254-b29f-0ca4ec170f77\") " pod="calico-system/csi-node-driver-tj5zh" Dec 16 13:10:59.499389 kubelet[3986]: E1216 13:10:59.499232 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.499389 kubelet[3986]: W1216 13:10:59.499241 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.499389 kubelet[3986]: E1216 13:10:59.499272 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.499482 kubelet[3986]: E1216 13:10:59.499409 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.499482 kubelet[3986]: W1216 13:10:59.499415 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.499482 kubelet[3986]: E1216 13:10:59.499438 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.499719 kubelet[3986]: E1216 13:10:59.499629 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.499719 kubelet[3986]: W1216 13:10:59.499659 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.499719 kubelet[3986]: E1216 13:10:59.499672 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.499719 kubelet[3986]: I1216 13:10:59.499691 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl6fx\" (UniqueName: \"kubernetes.io/projected/40ac25d7-4601-4254-b29f-0ca4ec170f77-kube-api-access-kl6fx\") pod \"csi-node-driver-tj5zh\" (UID: \"40ac25d7-4601-4254-b29f-0ca4ec170f77\") " pod="calico-system/csi-node-driver-tj5zh" Dec 16 13:10:59.500529 kubelet[3986]: E1216 13:10:59.499916 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.500529 kubelet[3986]: W1216 13:10:59.499924 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.500529 kubelet[3986]: E1216 13:10:59.499934 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.500529 kubelet[3986]: E1216 13:10:59.500088 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.500529 kubelet[3986]: W1216 13:10:59.500094 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.500529 kubelet[3986]: E1216 13:10:59.500102 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.500529 kubelet[3986]: E1216 13:10:59.500333 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.500529 kubelet[3986]: W1216 13:10:59.500342 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.500529 kubelet[3986]: E1216 13:10:59.500360 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.500794 kubelet[3986]: E1216 13:10:59.500585 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.500794 kubelet[3986]: W1216 13:10:59.500591 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.500794 kubelet[3986]: E1216 13:10:59.500605 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.500794 kubelet[3986]: E1216 13:10:59.500767 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.500794 kubelet[3986]: W1216 13:10:59.500773 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.500794 kubelet[3986]: E1216 13:10:59.500783 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.501057 kubelet[3986]: E1216 13:10:59.500926 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.501057 kubelet[3986]: W1216 13:10:59.500932 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.501057 kubelet[3986]: E1216 13:10:59.500938 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.501057 kubelet[3986]: I1216 13:10:59.500957 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/40ac25d7-4601-4254-b29f-0ca4ec170f77-varrun\") pod \"csi-node-driver-tj5zh\" (UID: \"40ac25d7-4601-4254-b29f-0ca4ec170f77\") " pod="calico-system/csi-node-driver-tj5zh" Dec 16 13:10:59.501160 kubelet[3986]: E1216 13:10:59.501106 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.501160 kubelet[3986]: W1216 13:10:59.501112 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.501160 kubelet[3986]: E1216 13:10:59.501119 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.501579 kubelet[3986]: E1216 13:10:59.501245 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.501579 kubelet[3986]: W1216 13:10:59.501250 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.501579 kubelet[3986]: E1216 13:10:59.501256 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.602270 kubelet[3986]: E1216 13:10:59.602251 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.602270 kubelet[3986]: W1216 13:10:59.602265 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.602365 kubelet[3986]: E1216 13:10:59.602276 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.602405 kubelet[3986]: E1216 13:10:59.602393 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.602405 kubelet[3986]: W1216 13:10:59.602401 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.602451 kubelet[3986]: E1216 13:10:59.602409 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.602543 kubelet[3986]: E1216 13:10:59.602532 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.602543 kubelet[3986]: W1216 13:10:59.602539 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.602586 kubelet[3986]: E1216 13:10:59.602555 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.602684 kubelet[3986]: E1216 13:10:59.602670 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.602684 kubelet[3986]: W1216 13:10:59.602680 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.602767 kubelet[3986]: E1216 13:10:59.602691 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.602808 kubelet[3986]: E1216 13:10:59.602800 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.602808 kubelet[3986]: W1216 13:10:59.602805 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.602859 kubelet[3986]: E1216 13:10:59.602814 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.602978 kubelet[3986]: E1216 13:10:59.602960 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.602978 kubelet[3986]: W1216 13:10:59.602976 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.603036 kubelet[3986]: E1216 13:10:59.602985 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.603099 kubelet[3986]: E1216 13:10:59.603076 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.603099 kubelet[3986]: W1216 13:10:59.603097 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.603160 kubelet[3986]: E1216 13:10:59.603106 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.603222 kubelet[3986]: E1216 13:10:59.603210 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.603222 kubelet[3986]: W1216 13:10:59.603216 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.603280 kubelet[3986]: E1216 13:10:59.603224 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.603308 kubelet[3986]: E1216 13:10:59.603303 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.603387 kubelet[3986]: W1216 13:10:59.603307 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.603387 kubelet[3986]: E1216 13:10:59.603313 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.603461 kubelet[3986]: E1216 13:10:59.603437 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.603461 kubelet[3986]: W1216 13:10:59.603458 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.603514 kubelet[3986]: E1216 13:10:59.603467 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.603593 kubelet[3986]: E1216 13:10:59.603581 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.603593 kubelet[3986]: W1216 13:10:59.603591 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.603660 kubelet[3986]: E1216 13:10:59.603619 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.603721 kubelet[3986]: E1216 13:10:59.603667 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.603721 kubelet[3986]: W1216 13:10:59.603671 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.603721 kubelet[3986]: E1216 13:10:59.603683 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.603811 kubelet[3986]: E1216 13:10:59.603754 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.603811 kubelet[3986]: W1216 13:10:59.603759 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.603811 kubelet[3986]: E1216 13:10:59.603769 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.603914 kubelet[3986]: E1216 13:10:59.603836 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.603914 kubelet[3986]: W1216 13:10:59.603841 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.603914 kubelet[3986]: E1216 13:10:59.603850 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.604033 kubelet[3986]: E1216 13:10:59.603918 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.604033 kubelet[3986]: W1216 13:10:59.603922 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.604033 kubelet[3986]: E1216 13:10:59.603931 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.604138 kubelet[3986]: E1216 13:10:59.604123 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.604138 kubelet[3986]: W1216 13:10:59.604134 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.604187 kubelet[3986]: E1216 13:10:59.604146 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.604300 kubelet[3986]: E1216 13:10:59.604277 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.604300 kubelet[3986]: W1216 13:10:59.604298 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.604343 kubelet[3986]: E1216 13:10:59.604311 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.604447 kubelet[3986]: E1216 13:10:59.604424 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.604447 kubelet[3986]: W1216 13:10:59.604444 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.604489 kubelet[3986]: E1216 13:10:59.604456 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.604611 kubelet[3986]: E1216 13:10:59.604594 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.604611 kubelet[3986]: W1216 13:10:59.604609 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.604650 kubelet[3986]: E1216 13:10:59.604622 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.604892 kubelet[3986]: E1216 13:10:59.604789 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.604892 kubelet[3986]: W1216 13:10:59.604796 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.604892 kubelet[3986]: E1216 13:10:59.604802 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.604981 kubelet[3986]: E1216 13:10:59.604903 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.604981 kubelet[3986]: W1216 13:10:59.604907 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.604981 kubelet[3986]: E1216 13:10:59.604913 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.605040 kubelet[3986]: E1216 13:10:59.604996 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.605040 kubelet[3986]: W1216 13:10:59.604999 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.605040 kubelet[3986]: E1216 13:10:59.605005 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.605101 kubelet[3986]: E1216 13:10:59.605073 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.605101 kubelet[3986]: W1216 13:10:59.605078 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.605101 kubelet[3986]: E1216 13:10:59.605083 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.605647 kubelet[3986]: E1216 13:10:59.605179 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.605647 kubelet[3986]: W1216 13:10:59.605184 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.605647 kubelet[3986]: E1216 13:10:59.605189 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.605647 kubelet[3986]: E1216 13:10:59.605293 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.605647 kubelet[3986]: W1216 13:10:59.605299 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.605647 kubelet[3986]: E1216 13:10:59.605306 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:10:59.612288 kubelet[3986]: E1216 13:10:59.612233 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:10:59.612288 kubelet[3986]: W1216 13:10:59.612250 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:10:59.612288 kubelet[3986]: E1216 13:10:59.612263 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:10:59.663000 audit[4547]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4547 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:59.663000 audit[4547]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc2f511930 a2=0 a3=7ffc2f51191c items=0 ppid=4121 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:59.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:10:59.666000 audit[4547]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4547 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:10:59.666000 audit[4547]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc2f511930 a2=0 a3=0 items=0 ppid=4121 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:59.666000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:00.341096 kubelet[3986]: E1216 13:11:00.341075 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:00.341096 kubelet[3986]: W1216 13:11:00.341093 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:00.341346 kubelet[3986]: E1216 13:11:00.341105 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:00.413873 containerd[2582]: time="2025-12-16T13:11:00.413848045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s7774,Uid:5d2fb090-a30d-4f98-bcef-0e64c3eba3e5,Namespace:calico-system,Attempt:0,}" Dec 16 13:11:00.467681 containerd[2582]: time="2025-12-16T13:11:00.467643946Z" level=info msg="connecting to shim ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029" address="unix:///run/containerd/s/5599ba4e756dd251531a8988cdd66e872edf2004e5d492c03cb3948ee2876117" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:00.492849 systemd[1]: Started cri-containerd-ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029.scope - libcontainer container ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029. 
Dec 16 13:11:00.500000 audit: BPF prog-id=180 op=LOAD Dec 16 13:11:00.502264 kernel: kauditd_printk_skb: 53 callbacks suppressed Dec 16 13:11:00.502320 kernel: audit: type=1334 audit(1765890660.500:568): prog-id=180 op=LOAD Dec 16 13:11:00.501000 audit: BPF prog-id=181 op=LOAD Dec 16 13:11:00.504495 kernel: audit: type=1334 audit(1765890660.501:569): prog-id=181 op=LOAD Dec 16 13:11:00.501000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.507647 kernel: audit: type=1300 audit(1765890660.501:569): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.515433 kernel: audit: type=1327 audit(1765890660.501:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.515490 kernel: audit: type=1334 audit(1765890660.501:570): prog-id=181 op=UNLOAD Dec 16 13:11:00.501000 audit: BPF prog-id=181 op=UNLOAD Dec 16 13:11:00.501000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.518598 kernel: audit: type=1300 audit(1765890660.501:570): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.524157 kernel: audit: type=1327 audit(1765890660.501:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.525477 kernel: audit: type=1334 audit(1765890660.501:571): prog-id=182 op=LOAD Dec 16 13:11:00.501000 audit: BPF prog-id=182 op=LOAD Dec 16 13:11:00.530154 kernel: audit: type=1300 audit(1765890660.501:571): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.501000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.534531 kernel: audit: type=1327 audit(1765890660.501:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.501000 audit: BPF prog-id=183 op=LOAD Dec 16 13:11:00.501000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.501000 audit: BPF prog-id=183 op=UNLOAD Dec 16 13:11:00.501000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.501000 audit: BPF prog-id=182 op=UNLOAD Dec 16 13:11:00.501000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.501000 audit: BPF prog-id=184 op=LOAD Dec 16 13:11:00.501000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4559 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:00.501000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666393838316535333136646432306134326438653537383433366539 Dec 16 13:11:00.544039 containerd[2582]: time="2025-12-16T13:11:00.544011436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s7774,Uid:5d2fb090-a30d-4f98-bcef-0e64c3eba3e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029\"" Dec 16 13:11:00.975270 kubelet[3986]: E1216 13:11:00.974459 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:01.199392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9098224.mount: Deactivated successfully. Dec 16 13:11:01.824331 containerd[2582]: time="2025-12-16T13:11:01.824302426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:01.828460 containerd[2582]: time="2025-12-16T13:11:01.828382780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 13:11:01.831222 containerd[2582]: time="2025-12-16T13:11:01.831201728Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:01.834730 containerd[2582]: time="2025-12-16T13:11:01.834618831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:01.834965 containerd[2582]: time="2025-12-16T13:11:01.834889905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.340877103s" Dec 16 13:11:01.834965 containerd[2582]: time="2025-12-16T13:11:01.834914265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:11:01.835760 containerd[2582]: time="2025-12-16T13:11:01.835738761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:11:01.846687 containerd[2582]: time="2025-12-16T13:11:01.846571320Z" level=info msg="CreateContainer within sandbox \"75c6647db56fde80114629d1826eef7b0002ee742e6c255455aa45a4054df4e3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:11:01.865984 containerd[2582]: time="2025-12-16T13:11:01.865963507Z" level=info msg="Container 4d4b854a5bc774ab814500256c66da674b1bd39ad246676e5712bf85d7efaa9e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:01.886047 containerd[2582]: time="2025-12-16T13:11:01.886022053Z" level=info msg="CreateContainer within sandbox 
\"75c6647db56fde80114629d1826eef7b0002ee742e6c255455aa45a4054df4e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4d4b854a5bc774ab814500256c66da674b1bd39ad246676e5712bf85d7efaa9e\"" Dec 16 13:11:01.887016 containerd[2582]: time="2025-12-16T13:11:01.886480070Z" level=info msg="StartContainer for \"4d4b854a5bc774ab814500256c66da674b1bd39ad246676e5712bf85d7efaa9e\"" Dec 16 13:11:01.887515 containerd[2582]: time="2025-12-16T13:11:01.887493074Z" level=info msg="connecting to shim 4d4b854a5bc774ab814500256c66da674b1bd39ad246676e5712bf85d7efaa9e" address="unix:///run/containerd/s/1300d8229b3166c786cf6b2757521d84e276a2eee941e7693717775608f3b226" protocol=ttrpc version=3 Dec 16 13:11:01.901893 systemd[1]: Started cri-containerd-4d4b854a5bc774ab814500256c66da674b1bd39ad246676e5712bf85d7efaa9e.scope - libcontainer container 4d4b854a5bc774ab814500256c66da674b1bd39ad246676e5712bf85d7efaa9e. Dec 16 13:11:01.910000 audit: BPF prog-id=185 op=LOAD Dec 16 13:11:01.911000 audit: BPF prog-id=186 op=LOAD Dec 16 13:11:01.911000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4421 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:01.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464346238353461356263373734616238313435303032353663363664 Dec 16 13:11:01.911000 audit: BPF prog-id=186 op=UNLOAD Dec 16 13:11:01.911000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:01.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464346238353461356263373734616238313435303032353663363664 Dec 16 13:11:01.911000 audit: BPF prog-id=187 op=LOAD Dec 16 13:11:01.911000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4421 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:01.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464346238353461356263373734616238313435303032353663363664 Dec 16 13:11:01.911000 audit: BPF prog-id=188 op=LOAD Dec 16 13:11:01.911000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4421 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:01.911000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464346238353461356263373734616238313435303032353663363664 Dec 16 13:11:01.911000 audit: BPF prog-id=188 op=UNLOAD Dec 16 13:11:01.911000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:01.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464346238353461356263373734616238313435303032353663363664 Dec 16 13:11:01.911000 audit: BPF prog-id=187 op=UNLOAD Dec 16 13:11:01.911000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:01.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464346238353461356263373734616238313435303032353663363664 Dec 16 13:11:01.911000 audit: BPF prog-id=189 op=LOAD Dec 16 13:11:01.911000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4421 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:01.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464346238353461356263373734616238313435303032353663363664 Dec 16 13:11:01.942637 containerd[2582]: time="2025-12-16T13:11:01.942613907Z" level=info msg="StartContainer for \"4d4b854a5bc774ab814500256c66da674b1bd39ad246676e5712bf85d7efaa9e\" returns successfully" Dec 16 13:11:02.105535 kubelet[3986]: E1216 13:11:02.104922 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.105535 kubelet[3986]: W1216 13:11:02.104950 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.105535 kubelet[3986]: E1216 13:11:02.104966 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:02.105926 kubelet[3986]: E1216 13:11:02.105910 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.105955 kubelet[3986]: W1216 13:11:02.105928 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.105974 kubelet[3986]: E1216 13:11:02.105953 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.106105 kubelet[3986]: E1216 13:11:02.106094 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.106105 kubelet[3986]: W1216 13:11:02.106106 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.106169 kubelet[3986]: E1216 13:11:02.106115 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.106277 kubelet[3986]: E1216 13:11:02.106268 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.106296 kubelet[3986]: W1216 13:11:02.106277 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.106296 kubelet[3986]: E1216 13:11:02.106285 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.106405 kubelet[3986]: E1216 13:11:02.106396 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.106428 kubelet[3986]: W1216 13:11:02.106405 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.106428 kubelet[3986]: E1216 13:11:02.106412 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.106509 kubelet[3986]: E1216 13:11:02.106502 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.106532 kubelet[3986]: W1216 13:11:02.106510 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.106532 kubelet[3986]: E1216 13:11:02.106515 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:02.106612 kubelet[3986]: E1216 13:11:02.106606 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.106633 kubelet[3986]: W1216 13:11:02.106613 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.106633 kubelet[3986]: E1216 13:11:02.106619 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.106729 kubelet[3986]: E1216 13:11:02.106723 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.106729 kubelet[3986]: W1216 13:11:02.106729 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.107184 kubelet[3986]: E1216 13:11:02.106735 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.107184 kubelet[3986]: E1216 13:11:02.106838 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.107184 kubelet[3986]: W1216 13:11:02.106842 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.107184 kubelet[3986]: E1216 13:11:02.106848 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.107184 kubelet[3986]: E1216 13:11:02.106928 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.107184 kubelet[3986]: W1216 13:11:02.106933 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.107184 kubelet[3986]: E1216 13:11:02.106946 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.107184 kubelet[3986]: E1216 13:11:02.107022 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.107184 kubelet[3986]: W1216 13:11:02.107026 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.107184 kubelet[3986]: E1216 13:11:02.107031 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:02.107395 kubelet[3986]: E1216 13:11:02.107114 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.107395 kubelet[3986]: W1216 13:11:02.107118 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.107395 kubelet[3986]: E1216 13:11:02.107123 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.107395 kubelet[3986]: E1216 13:11:02.107218 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.107395 kubelet[3986]: W1216 13:11:02.107223 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.107395 kubelet[3986]: E1216 13:11:02.107230 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.107395 kubelet[3986]: E1216 13:11:02.107332 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.107395 kubelet[3986]: W1216 13:11:02.107336 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.107395 kubelet[3986]: E1216 13:11:02.107341 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.108283 kubelet[3986]: E1216 13:11:02.107424 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.108283 kubelet[3986]: W1216 13:11:02.107428 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.108283 kubelet[3986]: E1216 13:11:02.107434 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.122257 kubelet[3986]: E1216 13:11:02.121947 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.122257 kubelet[3986]: W1216 13:11:02.121960 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.122257 kubelet[3986]: E1216 13:11:02.121972 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:02.124329 kubelet[3986]: E1216 13:11:02.123896 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.124329 kubelet[3986]: W1216 13:11:02.123910 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.124329 kubelet[3986]: E1216 13:11:02.123926 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.124329 kubelet[3986]: E1216 13:11:02.124053 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.124329 kubelet[3986]: W1216 13:11:02.124060 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.124329 kubelet[3986]: E1216 13:11:02.124067 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.124329 kubelet[3986]: E1216 13:11:02.124213 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.124329 kubelet[3986]: W1216 13:11:02.124218 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.124329 kubelet[3986]: E1216 13:11:02.124228 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.124329 kubelet[3986]: E1216 13:11:02.124328 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.124586 kubelet[3986]: W1216 13:11:02.124332 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.124586 kubelet[3986]: E1216 13:11:02.124343 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.124586 kubelet[3986]: E1216 13:11:02.124464 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.124586 kubelet[3986]: W1216 13:11:02.124481 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.124586 kubelet[3986]: E1216 13:11:02.124494 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:02.124844 kubelet[3986]: E1216 13:11:02.124729 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.124844 kubelet[3986]: W1216 13:11:02.124736 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.124844 kubelet[3986]: E1216 13:11:02.124745 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.124929 kubelet[3986]: E1216 13:11:02.124892 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.124929 kubelet[3986]: W1216 13:11:02.124899 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.124929 kubelet[3986]: E1216 13:11:02.124911 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.125189 kubelet[3986]: E1216 13:11:02.125017 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.125189 kubelet[3986]: W1216 13:11:02.125032 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.125189 kubelet[3986]: E1216 13:11:02.125045 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.125189 kubelet[3986]: E1216 13:11:02.125144 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.125189 kubelet[3986]: W1216 13:11:02.125149 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.125189 kubelet[3986]: E1216 13:11:02.125160 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.125850 kubelet[3986]: E1216 13:11:02.125261 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.125850 kubelet[3986]: W1216 13:11:02.125266 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.125850 kubelet[3986]: E1216 13:11:02.125273 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:02.125850 kubelet[3986]: E1216 13:11:02.125361 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.125850 kubelet[3986]: W1216 13:11:02.125365 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.125850 kubelet[3986]: E1216 13:11:02.125371 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.125850 kubelet[3986]: E1216 13:11:02.125455 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.125850 kubelet[3986]: W1216 13:11:02.125468 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.125850 kubelet[3986]: E1216 13:11:02.125474 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.125850 kubelet[3986]: E1216 13:11:02.125607 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.126039 kubelet[3986]: W1216 13:11:02.125612 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.126039 kubelet[3986]: E1216 13:11:02.125619 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.126039 kubelet[3986]: E1216 13:11:02.125757 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.126039 kubelet[3986]: W1216 13:11:02.125764 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.126039 kubelet[3986]: E1216 13:11:02.125779 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.126039 kubelet[3986]: E1216 13:11:02.126033 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.126039 kubelet[3986]: W1216 13:11:02.126038 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.126189 kubelet[3986]: E1216 13:11:02.126048 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:02.126448 kubelet[3986]: E1216 13:11:02.126354 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.126448 kubelet[3986]: W1216 13:11:02.126365 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.126448 kubelet[3986]: E1216 13:11:02.126374 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.126538 kubelet[3986]: E1216 13:11:02.126473 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:02.126538 kubelet[3986]: W1216 13:11:02.126478 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:02.126538 kubelet[3986]: E1216 13:11:02.126484 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:02.318554 kubelet[3986]: I1216 13:11:02.317938 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59c468dc9b-c6hvg" podStartSLOduration=0.97602757 podStartE2EDuration="3.317926677s" podCreationTimestamp="2025-12-16 13:10:59 +0000 UTC" firstStartedPulling="2025-12-16 13:10:59.493667685 +0000 UTC m=+20.609209603" lastFinishedPulling="2025-12-16 13:11:01.835566799 +0000 UTC m=+22.951108710" observedRunningTime="2025-12-16 13:11:02.068755654 +0000 UTC m=+23.184297596" watchObservedRunningTime="2025-12-16 13:11:02.317926677 +0000 UTC m=+23.433468598" Dec 16 13:11:02.334000 audit[4683]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4683 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:02.334000 audit[4683]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffca5fefb90 a2=0 a3=7ffca5fefb7c items=0 ppid=4121 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:02.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:02.338000 audit[4683]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4683 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:02.338000 audit[4683]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffca5fefb90 a2=0 a3=7ffca5fefb7c items=0 ppid=4121 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:02.338000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:02.973668 kubelet[3986]: E1216 13:11:02.973637 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:03.114810 kubelet[3986]: E1216 13:11:03.114783 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.114810 kubelet[3986]: W1216 13:11:03.114809 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.115169 kubelet[3986]: E1216 13:11:03.114827 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.115169 kubelet[3986]: E1216 13:11:03.114964 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.115169 kubelet[3986]: W1216 13:11:03.114969 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.115169 kubelet[3986]: E1216 13:11:03.114977 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.115169 kubelet[3986]: E1216 13:11:03.115073 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.115169 kubelet[3986]: W1216 13:11:03.115078 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.115169 kubelet[3986]: E1216 13:11:03.115084 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.115169 kubelet[3986]: E1216 13:11:03.115171 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.115366 kubelet[3986]: W1216 13:11:03.115175 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.115366 kubelet[3986]: E1216 13:11:03.115181 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
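The NETFILTER_CFG/SYSCALL/PROCTITLE triples above (here and at 13:10:59.663) record iptables-restore reloading filter and nat rule sets through /usr/sbin/xtables-nft-multi. The audit PROCTITLE field is the process argv, hex-encoded with NUL separators, so it can be decoded directly from the log. A small decoding sketch in Go follows; it is illustrative only and not part of the tooling on this host.

    // proctitle_decode_sketch.go: decode an audit PROCTITLE value into the argv it encodes.
    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"strings"
    )

    func main() {
    	// Hex string copied from the NETFILTER_CFG events in this log.
    	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

    	raw, err := hex.DecodeString(proctitle)
    	if err != nil {
    		panic(err)
    	}
    	// argv entries are NUL-separated in the audit record.
    	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
    	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
    }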
Error: unexpected end of JSON input" Dec 16 13:11:03.115366 kubelet[3986]: E1216 13:11:03.115272 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.115366 kubelet[3986]: W1216 13:11:03.115276 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.115366 kubelet[3986]: E1216 13:11:03.115282 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.115845 kubelet[3986]: E1216 13:11:03.115829 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.115845 kubelet[3986]: W1216 13:11:03.115845 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.115935 kubelet[3986]: E1216 13:11:03.115855 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.115979 kubelet[3986]: E1216 13:11:03.115959 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.115979 kubelet[3986]: W1216 13:11:03.115968 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.115979 kubelet[3986]: E1216 13:11:03.115974 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.116071 kubelet[3986]: E1216 13:11:03.116061 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.116071 kubelet[3986]: W1216 13:11:03.116069 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.116112 kubelet[3986]: E1216 13:11:03.116074 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.117291 kubelet[3986]: E1216 13:11:03.117255 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.117291 kubelet[3986]: W1216 13:11:03.117276 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.117291 kubelet[3986]: E1216 13:11:03.117289 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:03.119185 kubelet[3986]: E1216 13:11:03.117433 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.119185 kubelet[3986]: W1216 13:11:03.117441 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.119185 kubelet[3986]: E1216 13:11:03.117450 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.119185 kubelet[3986]: E1216 13:11:03.117584 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.119185 kubelet[3986]: W1216 13:11:03.117589 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.119185 kubelet[3986]: E1216 13:11:03.117595 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.119185 kubelet[3986]: E1216 13:11:03.117693 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.119185 kubelet[3986]: W1216 13:11:03.117697 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.119185 kubelet[3986]: E1216 13:11:03.117731 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.119185 kubelet[3986]: E1216 13:11:03.117838 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.119396 kubelet[3986]: W1216 13:11:03.117843 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.119396 kubelet[3986]: E1216 13:11:03.117849 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.119396 kubelet[3986]: E1216 13:11:03.117947 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.119396 kubelet[3986]: W1216 13:11:03.117953 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.119396 kubelet[3986]: E1216 13:11:03.117960 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:03.119396 kubelet[3986]: E1216 13:11:03.118043 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.119396 kubelet[3986]: W1216 13:11:03.118048 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.119396 kubelet[3986]: E1216 13:11:03.118054 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.131362 kubelet[3986]: E1216 13:11:03.131345 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.131362 kubelet[3986]: W1216 13:11:03.131358 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.131510 kubelet[3986]: E1216 13:11:03.131369 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.131510 kubelet[3986]: E1216 13:11:03.131496 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.131510 kubelet[3986]: W1216 13:11:03.131502 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.131638 kubelet[3986]: E1216 13:11:03.131510 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.131638 kubelet[3986]: E1216 13:11:03.131631 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.131638 kubelet[3986]: W1216 13:11:03.131636 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.131785 kubelet[3986]: E1216 13:11:03.131643 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.131822 kubelet[3986]: E1216 13:11:03.131810 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.131822 kubelet[3986]: W1216 13:11:03.131819 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.131960 kubelet[3986]: E1216 13:11:03.131948 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:03.132101 kubelet[3986]: E1216 13:11:03.132091 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.132101 kubelet[3986]: W1216 13:11:03.132100 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.132150 kubelet[3986]: E1216 13:11:03.132112 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.132226 kubelet[3986]: E1216 13:11:03.132216 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.132226 kubelet[3986]: W1216 13:11:03.132224 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.132308 kubelet[3986]: E1216 13:11:03.132281 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.132335 kubelet[3986]: E1216 13:11:03.132327 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.132335 kubelet[3986]: W1216 13:11:03.132332 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.132375 kubelet[3986]: E1216 13:11:03.132338 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.132436 kubelet[3986]: E1216 13:11:03.132428 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.132436 kubelet[3986]: W1216 13:11:03.132435 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.132483 kubelet[3986]: E1216 13:11:03.132441 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.132561 kubelet[3986]: E1216 13:11:03.132541 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.132585 kubelet[3986]: W1216 13:11:03.132561 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.132585 kubelet[3986]: E1216 13:11:03.132569 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:03.132714 kubelet[3986]: E1216 13:11:03.132690 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.132741 kubelet[3986]: W1216 13:11:03.132720 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.132817 kubelet[3986]: E1216 13:11:03.132805 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.132951 kubelet[3986]: E1216 13:11:03.132942 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.132951 kubelet[3986]: W1216 13:11:03.132949 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.133024 kubelet[3986]: E1216 13:11:03.133017 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.133142 kubelet[3986]: E1216 13:11:03.133133 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.133167 kubelet[3986]: W1216 13:11:03.133142 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.133167 kubelet[3986]: E1216 13:11:03.133154 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.133311 kubelet[3986]: E1216 13:11:03.133301 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.133311 kubelet[3986]: W1216 13:11:03.133309 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.133384 kubelet[3986]: E1216 13:11:03.133373 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.133485 kubelet[3986]: E1216 13:11:03.133473 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.133485 kubelet[3986]: W1216 13:11:03.133481 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.133536 kubelet[3986]: E1216 13:11:03.133487 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:03.133650 kubelet[3986]: E1216 13:11:03.133570 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.133650 kubelet[3986]: W1216 13:11:03.133575 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.133650 kubelet[3986]: E1216 13:11:03.133579 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.133733 kubelet[3986]: E1216 13:11:03.133671 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.133733 kubelet[3986]: W1216 13:11:03.133676 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.133733 kubelet[3986]: E1216 13:11:03.133681 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.134045 kubelet[3986]: E1216 13:11:03.134030 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.134045 kubelet[3986]: W1216 13:11:03.134042 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.134177 kubelet[3986]: E1216 13:11:03.134056 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:11:03.134177 kubelet[3986]: E1216 13:11:03.134164 3986 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:11:03.134177 kubelet[3986]: W1216 13:11:03.134169 3986 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:11:03.134177 kubelet[3986]: E1216 13:11:03.134175 3986 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:11:03.137301 containerd[2582]: time="2025-12-16T13:11:03.137273519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:03.139724 containerd[2582]: time="2025-12-16T13:11:03.139634139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:03.142511 containerd[2582]: time="2025-12-16T13:11:03.142486216Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:03.146497 containerd[2582]: time="2025-12-16T13:11:03.146142615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:03.146497 containerd[2582]: time="2025-12-16T13:11:03.146416810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.310649105s" Dec 16 13:11:03.146497 containerd[2582]: time="2025-12-16T13:11:03.146439073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:11:03.148259 containerd[2582]: time="2025-12-16T13:11:03.148233325Z" level=info msg="CreateContainer within sandbox \"ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:11:03.170855 containerd[2582]: time="2025-12-16T13:11:03.170827746Z" level=info msg="Container 258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:03.174273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407019861.mount: Deactivated successfully. Dec 16 13:11:03.187720 containerd[2582]: time="2025-12-16T13:11:03.187163287Z" level=info msg="CreateContainer within sandbox \"ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68\"" Dec 16 13:11:03.188359 containerd[2582]: time="2025-12-16T13:11:03.188337421Z" level=info msg="StartContainer for \"258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68\"" Dec 16 13:11:03.192554 containerd[2582]: time="2025-12-16T13:11:03.192530377Z" level=info msg="connecting to shim 258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68" address="unix:///run/containerd/s/5599ba4e756dd251531a8988cdd66e872edf2004e5d492c03cb3948ee2876117" protocol=ttrpc version=3 Dec 16 13:11:03.212327 systemd[1]: Started cri-containerd-258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68.scope - libcontainer container 258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68. 
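The repeated driver-call.go/plugins.go errors above come from the kubelet's FlexVolume probing: it executes each driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the `init` argument and unmarshals stdout as JSON, so while the `uds` binary is not yet installed ("executable file not found in $PATH") the empty output yields "unexpected end of JSON input". The flexvol-driver init container started just above is what installs that binary. As a rough sketch of the call convention only (not Calico's actual `uds` driver), a minimal FlexVolume driver answering `init` could look like:

```python
#!/usr/bin/env python3
# Illustrative sketch only: a minimal FlexVolume driver that answers the
# kubelet's `init` call with valid JSON, so driver probing does not fail with
# "unexpected end of JSON input". Calico's real `uds` binary does more than this.
import json
import sys

def main() -> int:
    # The kubelet invokes the driver as: <driver> init | attach | mount | ...
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and advertise that no attach/detach support is needed.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any operation this sketch does not implement is reported as unsupported.
    print(json.dumps({"status": "Not supported",
                      "message": f"unsupported operation {op!r}"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```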
Dec 16 13:11:03.237000 audit: BPF prog-id=190 op=LOAD Dec 16 13:11:03.237000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4559 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383837323633376561356437333666353664386362393664663839 Dec 16 13:11:03.237000 audit: BPF prog-id=191 op=LOAD Dec 16 13:11:03.237000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4559 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383837323633376561356437333666353664386362393664663839 Dec 16 13:11:03.237000 audit: BPF prog-id=191 op=UNLOAD Dec 16 13:11:03.237000 audit[4721]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383837323633376561356437333666353664386362393664663839 Dec 16 13:11:03.237000 audit: BPF prog-id=190 op=UNLOAD Dec 16 13:11:03.237000 audit[4721]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383837323633376561356437333666353664386362393664663839 Dec 16 13:11:03.237000 audit: BPF prog-id=192 op=LOAD Dec 16 13:11:03.237000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4559 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383837323633376561356437333666353664386362393664663839 Dec 16 13:11:03.257589 containerd[2582]: time="2025-12-16T13:11:03.257546263Z" level=info msg="StartContainer for 
\"258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68\" returns successfully" Dec 16 13:11:03.263442 systemd[1]: cri-containerd-258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68.scope: Deactivated successfully. Dec 16 13:11:03.265000 audit: BPF prog-id=192 op=UNLOAD Dec 16 13:11:03.267240 containerd[2582]: time="2025-12-16T13:11:03.267156127Z" level=info msg="received container exit event container_id:\"258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68\" id:\"258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68\" pid:4733 exited_at:{seconds:1765890663 nanos:266832963}" Dec 16 13:11:03.285096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-258872637ea5d736f56d8cb96df896a9e063714792874e44f5bc80038b113c68-rootfs.mount: Deactivated successfully. Dec 16 13:11:04.975183 kubelet[3986]: E1216 13:11:04.974071 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:06.063345 containerd[2582]: time="2025-12-16T13:11:06.063301626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:11:06.974276 kubelet[3986]: E1216 13:11:06.973724 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:08.974813 kubelet[3986]: E1216 13:11:08.974776 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:09.405047 containerd[2582]: time="2025-12-16T13:11:09.405013137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:09.407596 containerd[2582]: time="2025-12-16T13:11:09.407564214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 13:11:09.410378 containerd[2582]: time="2025-12-16T13:11:09.410337608Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:09.414022 containerd[2582]: time="2025-12-16T13:11:09.413977588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:09.414592 containerd[2582]: time="2025-12-16T13:11:09.414308928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.350976215s" Dec 16 13:11:09.414592 containerd[2582]: 
time="2025-12-16T13:11:09.414332629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:11:09.416439 containerd[2582]: time="2025-12-16T13:11:09.416411278Z" level=info msg="CreateContainer within sandbox \"ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:11:09.436369 containerd[2582]: time="2025-12-16T13:11:09.433108541Z" level=info msg="Container e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:09.455794 containerd[2582]: time="2025-12-16T13:11:09.455730714Z" level=info msg="CreateContainer within sandbox \"ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1\"" Dec 16 13:11:09.461802 containerd[2582]: time="2025-12-16T13:11:09.460041578Z" level=info msg="StartContainer for \"e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1\"" Dec 16 13:11:09.461802 containerd[2582]: time="2025-12-16T13:11:09.461113923Z" level=info msg="connecting to shim e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1" address="unix:///run/containerd/s/5599ba4e756dd251531a8988cdd66e872edf2004e5d492c03cb3948ee2876117" protocol=ttrpc version=3 Dec 16 13:11:09.485879 systemd[1]: Started cri-containerd-e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1.scope - libcontainer container e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1. Dec 16 13:11:09.521000 audit: BPF prog-id=193 op=LOAD Dec 16 13:11:09.524116 kernel: kauditd_printk_skb: 56 callbacks suppressed Dec 16 13:11:09.524176 kernel: audit: type=1334 audit(1765890669.521:592): prog-id=193 op=LOAD Dec 16 13:11:09.521000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4559 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:09.528738 kernel: audit: type=1300 audit(1765890669.521:592): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4559 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:09.533651 kernel: audit: type=1327 audit(1765890669.521:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535343033653238306466306662633136326361393937343533393139 Dec 16 13:11:09.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535343033653238306466306662633136326361393937343533393139 Dec 16 13:11:09.523000 audit: BPF prog-id=194 op=LOAD Dec 16 13:11:09.538745 kernel: audit: type=1334 audit(1765890669.523:593): prog-id=194 op=LOAD Dec 16 13:11:09.538804 kernel: audit: type=1300 audit(1765890669.523:593): arch=c000003e syscall=321 success=yes exit=22 a0=5 
a1=c000138218 a2=98 a3=0 items=0 ppid=4559 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:09.523000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4559 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:09.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535343033653238306466306662633136326361393937343533393139 Dec 16 13:11:09.551611 kernel: audit: type=1327 audit(1765890669.523:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535343033653238306466306662633136326361393937343533393139 Dec 16 13:11:09.551669 kernel: audit: type=1334 audit(1765890669.523:594): prog-id=194 op=UNLOAD Dec 16 13:11:09.523000 audit: BPF prog-id=194 op=UNLOAD Dec 16 13:11:09.523000 audit[4778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:09.554757 kernel: audit: type=1300 audit(1765890669.523:594): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:09.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535343033653238306466306662633136326361393937343533393139 Dec 16 13:11:09.558409 kernel: audit: type=1327 audit(1765890669.523:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535343033653238306466306662633136326361393937343533393139 Dec 16 13:11:09.523000 audit: BPF prog-id=193 op=UNLOAD Dec 16 13:11:09.560836 kernel: audit: type=1334 audit(1765890669.523:595): prog-id=193 op=UNLOAD Dec 16 13:11:09.523000 audit[4778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:09.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535343033653238306466306662633136326361393937343533393139 Dec 16 13:11:09.523000 audit: BPF prog-id=195 op=LOAD Dec 16 13:11:09.523000 audit[4778]: SYSCALL arch=c000003e syscall=321 
success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4559 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:09.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535343033653238306466306662633136326361393937343533393139 Dec 16 13:11:09.567014 containerd[2582]: time="2025-12-16T13:11:09.566994851Z" level=info msg="StartContainer for \"e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1\" returns successfully" Dec 16 13:11:10.766630 containerd[2582]: time="2025-12-16T13:11:10.766582722Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:11:10.768442 systemd[1]: cri-containerd-e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1.scope: Deactivated successfully. Dec 16 13:11:10.768798 systemd[1]: cri-containerd-e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1.scope: Consumed 354ms CPU time, 192.3M memory peak, 171.3M written to disk. Dec 16 13:11:10.770097 containerd[2582]: time="2025-12-16T13:11:10.770067646Z" level=info msg="received container exit event container_id:\"e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1\" id:\"e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1\" pid:4790 exited_at:{seconds:1765890670 nanos:769853664}" Dec 16 13:11:10.771000 audit: BPF prog-id=195 op=UNLOAD Dec 16 13:11:10.787544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5403e280df0fbc162ca997453919f0690818c5c052b1ca322c180f236491bc1-rootfs.mount: Deactivated successfully. Dec 16 13:11:10.839946 kubelet[3986]: I1216 13:11:10.839924 3986 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:11:10.882816 systemd[1]: Created slice kubepods-besteffort-pod2527a715_8ea0_4d0b_a053_0e471ff72634.slice - libcontainer container kubepods-besteffort-pod2527a715_8ea0_4d0b_a053_0e471ff72634.slice. Dec 16 13:11:10.897060 systemd[1]: Created slice kubepods-burstable-pod9b89c7b3_c47a_4d49_a11e_8c488d12250e.slice - libcontainer container kubepods-burstable-pod9b89c7b3_c47a_4d49_a11e_8c488d12250e.slice. Dec 16 13:11:10.903439 systemd[1]: Created slice kubepods-besteffort-podc9d340d2_955d_4739_b7cb_fc9a188575e3.slice - libcontainer container kubepods-besteffort-podc9d340d2_955d_4739_b7cb_fc9a188575e3.slice. Dec 16 13:11:10.909201 systemd[1]: Created slice kubepods-burstable-pod2c4149fe_258b_4d56_9699_3c8027ef2524.slice - libcontainer container kubepods-burstable-pod2c4149fe_258b_4d56_9699_3c8027ef2524.slice. Dec 16 13:11:10.917895 systemd[1]: Created slice kubepods-besteffort-podf055798a_e699_42ea_8c24_f7896c6361d5.slice - libcontainer container kubepods-besteffort-podf055798a_e699_42ea_8c24_f7896c6361d5.slice. Dec 16 13:11:10.927593 systemd[1]: Created slice kubepods-besteffort-pod4125d4ae_25c4_4e66_8dbd_1294c8e7e14d.slice - libcontainer container kubepods-besteffort-pod4125d4ae_25c4_4e66_8dbd_1294c8e7e14d.slice. 
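The `proctitle=` fields in the preceding audit records are the process's argv, hex-encoded with NUL separators (the long `72756E63…` values belong to `runc`; the earlier one belongs to `iptables-restore`). A small, self-contained decoder for that encoding, assuming the hex form the audit subsystem emits when the command line contains non-printable bytes, might look like:

```python
#!/usr/bin/env python3
# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
# Purely illustrative; the sample value is copied from a record in this log.

def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    # argv elements are separated by NUL bytes; drop empty trailing pieces.
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))
    # -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```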
Dec 16 13:11:10.935151 systemd[1]: Created slice kubepods-besteffort-pod02384784_68b8_42d3_aba8_a97ba2d37c12.slice - libcontainer container kubepods-besteffort-pod02384784_68b8_42d3_aba8_a97ba2d37c12.slice. Dec 16 13:11:10.980956 systemd[1]: Created slice kubepods-besteffort-pod40ac25d7_4601_4254_b29f_0ca4ec170f77.slice - libcontainer container kubepods-besteffort-pod40ac25d7_4601_4254_b29f_0ca4ec170f77.slice. Dec 16 13:11:10.982786 containerd[2582]: time="2025-12-16T13:11:10.982763638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tj5zh,Uid:40ac25d7-4601-4254-b29f-0ca4ec170f77,Namespace:calico-system,Attempt:0,}" Dec 16 13:11:10.984062 kubelet[3986]: I1216 13:11:10.984043 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/02384784-68b8-42d3-aba8-a97ba2d37c12-calico-apiserver-certs\") pod \"calico-apiserver-67f6997d77-8hwmz\" (UID: \"02384784-68b8-42d3-aba8-a97ba2d37c12\") " pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" Dec 16 13:11:10.984135 kubelet[3986]: I1216 13:11:10.984073 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c9d340d2-955d-4739-b7cb-fc9a188575e3-goldmane-key-pair\") pod \"goldmane-666569f655-7sznl\" (UID: \"c9d340d2-955d-4739-b7cb-fc9a188575e3\") " pod="calico-system/goldmane-666569f655-7sznl" Dec 16 13:11:10.984135 kubelet[3986]: I1216 13:11:10.984089 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljlh\" (UniqueName: \"kubernetes.io/projected/f055798a-e699-42ea-8c24-f7896c6361d5-kube-api-access-dljlh\") pod \"calico-apiserver-67f6997d77-z274w\" (UID: \"f055798a-e699-42ea-8c24-f7896c6361d5\") " pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" Dec 16 13:11:10.984135 kubelet[3986]: I1216 13:11:10.984109 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59k48\" (UniqueName: \"kubernetes.io/projected/2527a715-8ea0-4d0b-a053-0e471ff72634-kube-api-access-59k48\") pod \"calico-kube-controllers-5469dcf444-55tl9\" (UID: \"2527a715-8ea0-4d0b-a053-0e471ff72634\") " pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" Dec 16 13:11:10.984135 kubelet[3986]: I1216 13:11:10.984125 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c4149fe-258b-4d56-9699-3c8027ef2524-config-volume\") pod \"coredns-668d6bf9bc-q8vs8\" (UID: \"2c4149fe-258b-4d56-9699-3c8027ef2524\") " pod="kube-system/coredns-668d6bf9bc-q8vs8" Dec 16 13:11:10.984316 kubelet[3986]: I1216 13:11:10.984143 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzznm\" (UniqueName: \"kubernetes.io/projected/c9d340d2-955d-4739-b7cb-fc9a188575e3-kube-api-access-lzznm\") pod \"goldmane-666569f655-7sznl\" (UID: \"c9d340d2-955d-4739-b7cb-fc9a188575e3\") " pod="calico-system/goldmane-666569f655-7sznl" Dec 16 13:11:10.984316 kubelet[3986]: I1216 13:11:10.984161 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ftln\" (UniqueName: \"kubernetes.io/projected/02384784-68b8-42d3-aba8-a97ba2d37c12-kube-api-access-8ftln\") pod \"calico-apiserver-67f6997d77-8hwmz\" (UID: 
\"02384784-68b8-42d3-aba8-a97ba2d37c12\") " pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" Dec 16 13:11:10.984316 kubelet[3986]: I1216 13:11:10.984180 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-whisker-backend-key-pair\") pod \"whisker-5784bdfdb5-2zcqt\" (UID: \"4125d4ae-25c4-4e66-8dbd-1294c8e7e14d\") " pod="calico-system/whisker-5784bdfdb5-2zcqt" Dec 16 13:11:10.984316 kubelet[3986]: I1216 13:11:10.984194 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xf4\" (UniqueName: \"kubernetes.io/projected/2c4149fe-258b-4d56-9699-3c8027ef2524-kube-api-access-q6xf4\") pod \"coredns-668d6bf9bc-q8vs8\" (UID: \"2c4149fe-258b-4d56-9699-3c8027ef2524\") " pod="kube-system/coredns-668d6bf9bc-q8vs8" Dec 16 13:11:10.984316 kubelet[3986]: I1216 13:11:10.984218 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt94f\" (UniqueName: \"kubernetes.io/projected/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-kube-api-access-jt94f\") pod \"whisker-5784bdfdb5-2zcqt\" (UID: \"4125d4ae-25c4-4e66-8dbd-1294c8e7e14d\") " pod="calico-system/whisker-5784bdfdb5-2zcqt" Dec 16 13:11:10.984458 kubelet[3986]: I1216 13:11:10.984237 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9d340d2-955d-4739-b7cb-fc9a188575e3-config\") pod \"goldmane-666569f655-7sznl\" (UID: \"c9d340d2-955d-4739-b7cb-fc9a188575e3\") " pod="calico-system/goldmane-666569f655-7sznl" Dec 16 13:11:10.984458 kubelet[3986]: I1216 13:11:10.984257 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-whisker-ca-bundle\") pod \"whisker-5784bdfdb5-2zcqt\" (UID: \"4125d4ae-25c4-4e66-8dbd-1294c8e7e14d\") " pod="calico-system/whisker-5784bdfdb5-2zcqt" Dec 16 13:11:10.984458 kubelet[3986]: I1216 13:11:10.984277 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2527a715-8ea0-4d0b-a053-0e471ff72634-tigera-ca-bundle\") pod \"calico-kube-controllers-5469dcf444-55tl9\" (UID: \"2527a715-8ea0-4d0b-a053-0e471ff72634\") " pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" Dec 16 13:11:10.984458 kubelet[3986]: I1216 13:11:10.984294 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9d340d2-955d-4739-b7cb-fc9a188575e3-goldmane-ca-bundle\") pod \"goldmane-666569f655-7sznl\" (UID: \"c9d340d2-955d-4739-b7cb-fc9a188575e3\") " pod="calico-system/goldmane-666569f655-7sznl" Dec 16 13:11:10.984458 kubelet[3986]: I1216 13:11:10.984320 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b89c7b3-c47a-4d49-a11e-8c488d12250e-config-volume\") pod \"coredns-668d6bf9bc-skmx5\" (UID: \"9b89c7b3-c47a-4d49-a11e-8c488d12250e\") " pod="kube-system/coredns-668d6bf9bc-skmx5" Dec 16 13:11:10.984544 kubelet[3986]: I1216 13:11:10.984338 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f055798a-e699-42ea-8c24-f7896c6361d5-calico-apiserver-certs\") pod \"calico-apiserver-67f6997d77-z274w\" (UID: \"f055798a-e699-42ea-8c24-f7896c6361d5\") " pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" Dec 16 13:11:10.984544 kubelet[3986]: I1216 13:11:10.984355 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjlls\" (UniqueName: \"kubernetes.io/projected/9b89c7b3-c47a-4d49-a11e-8c488d12250e-kube-api-access-kjlls\") pod \"coredns-668d6bf9bc-skmx5\" (UID: \"9b89c7b3-c47a-4d49-a11e-8c488d12250e\") " pod="kube-system/coredns-668d6bf9bc-skmx5" Dec 16 13:11:11.255181 containerd[2582]: time="2025-12-16T13:11:11.255114902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6997d77-z274w,Uid:f055798a-e699-42ea-8c24-f7896c6361d5,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:11:11.490040 containerd[2582]: time="2025-12-16T13:11:11.489994087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5469dcf444-55tl9,Uid:2527a715-8ea0-4d0b-a053-0e471ff72634,Namespace:calico-system,Attempt:0,}" Dec 16 13:11:11.501486 containerd[2582]: time="2025-12-16T13:11:11.501466138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-skmx5,Uid:9b89c7b3-c47a-4d49-a11e-8c488d12250e,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:11.506217 containerd[2582]: time="2025-12-16T13:11:11.506114167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7sznl,Uid:c9d340d2-955d-4739-b7cb-fc9a188575e3,Namespace:calico-system,Attempt:0,}" Dec 16 13:11:11.513846 containerd[2582]: time="2025-12-16T13:11:11.513814642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8vs8,Uid:2c4149fe-258b-4d56-9699-3c8027ef2524,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:11.532468 containerd[2582]: time="2025-12-16T13:11:11.532422108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5784bdfdb5-2zcqt,Uid:4125d4ae-25c4-4e66-8dbd-1294c8e7e14d,Namespace:calico-system,Attempt:0,}" Dec 16 13:11:11.537906 containerd[2582]: time="2025-12-16T13:11:11.537887498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6997d77-8hwmz,Uid:02384784-68b8-42d3-aba8-a97ba2d37c12,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:11:11.818232 containerd[2582]: time="2025-12-16T13:11:11.818182304Z" level=error msg="Failed to destroy network for sandbox \"7a6836a561e8cecbe466b902def688647de47b63157f2038f5704bb95cb661d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.821026 systemd[1]: run-netns-cni\x2db1f085f7\x2d3c8b\x2da313\x2dfb9a\x2dc565d9367709.mount: Deactivated successfully. 
Dec 16 13:11:11.826271 containerd[2582]: time="2025-12-16T13:11:11.826187790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tj5zh,Uid:40ac25d7-4601-4254-b29f-0ca4ec170f77,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a6836a561e8cecbe466b902def688647de47b63157f2038f5704bb95cb661d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.826692 kubelet[3986]: E1216 13:11:11.826467 3986 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a6836a561e8cecbe466b902def688647de47b63157f2038f5704bb95cb661d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.826692 kubelet[3986]: E1216 13:11:11.826523 3986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a6836a561e8cecbe466b902def688647de47b63157f2038f5704bb95cb661d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tj5zh" Dec 16 13:11:11.826692 kubelet[3986]: E1216 13:11:11.826543 3986 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a6836a561e8cecbe466b902def688647de47b63157f2038f5704bb95cb661d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tj5zh" Dec 16 13:11:11.826846 kubelet[3986]: E1216 13:11:11.826583 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a6836a561e8cecbe466b902def688647de47b63157f2038f5704bb95cb661d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:11.830501 containerd[2582]: time="2025-12-16T13:11:11.830449469Z" level=error msg="Failed to destroy network for sandbox \"896dd0f15d44fc13641bf9e93e1aa8b85c9d0193620e58d8abda382c777696ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.833081 systemd[1]: run-netns-cni\x2d5f4f64bd\x2d9ecb\x2d1d32\x2deab7\x2d263680406477.mount: Deactivated successfully. 
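The transient netns mount units above and below (e.g. `run-netns-cni\x2d5f4f64bd\x2d9ecb\x2d1d32\x2deab7\x2d263680406477.mount`) use systemd's unit-name escaping, in which `/` becomes `-` and a literal `-` becomes `\x2d`. A small decoder for that escaped form, assuming only the `-` and `\xNN` rules visible here, is sketched below:

```python
#!/usr/bin/env python3
# Illustrative decoder for systemd path-escaped unit names as seen in this log.
import re

def unescape_unit_path(escaped: str) -> str:
    # "-" separates path components; "\xNN" escapes bytes within a component.
    parts = escaped.split("-")
    decode = lambda s: re.sub(r"\\x([0-9a-fA-F]{2})",
                              lambda m: chr(int(m.group(1), 16)), s)
    return "/" + "/".join(decode(p) for p in parts)

if __name__ == "__main__":
    print(unescape_unit_path(r"run-netns-cni\x2d5f4f64bd\x2d9ecb\x2d1d32\x2deab7\x2d263680406477"))
    # -> /run/netns/cni-5f4f64bd-9ecb-1d32-eab7-263680406477
```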
Dec 16 13:11:11.838813 containerd[2582]: time="2025-12-16T13:11:11.838769810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5469dcf444-55tl9,Uid:2527a715-8ea0-4d0b-a053-0e471ff72634,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"896dd0f15d44fc13641bf9e93e1aa8b85c9d0193620e58d8abda382c777696ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.839096 kubelet[3986]: E1216 13:11:11.839028 3986 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"896dd0f15d44fc13641bf9e93e1aa8b85c9d0193620e58d8abda382c777696ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.839096 kubelet[3986]: E1216 13:11:11.839077 3986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"896dd0f15d44fc13641bf9e93e1aa8b85c9d0193620e58d8abda382c777696ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" Dec 16 13:11:11.839185 kubelet[3986]: E1216 13:11:11.839095 3986 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"896dd0f15d44fc13641bf9e93e1aa8b85c9d0193620e58d8abda382c777696ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" Dec 16 13:11:11.839185 kubelet[3986]: E1216 13:11:11.839133 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5469dcf444-55tl9_calico-system(2527a715-8ea0-4d0b-a053-0e471ff72634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5469dcf444-55tl9_calico-system(2527a715-8ea0-4d0b-a053-0e471ff72634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"896dd0f15d44fc13641bf9e93e1aa8b85c9d0193620e58d8abda382c777696ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:11:11.883530 containerd[2582]: time="2025-12-16T13:11:11.882962802Z" level=error msg="Failed to destroy network for sandbox \"852a4736cab93d9b11d4132e5d06f838d56840d9131b9194b637394ba6da0c89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.885495 systemd[1]: run-netns-cni\x2d4506c23e\x2dd8a8\x2dbb5d\x2dbd2f\x2d918e5f998cd2.mount: Deactivated successfully. 
Dec 16 13:11:11.898718 containerd[2582]: time="2025-12-16T13:11:11.898644762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6997d77-8hwmz,Uid:02384784-68b8-42d3-aba8-a97ba2d37c12,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"852a4736cab93d9b11d4132e5d06f838d56840d9131b9194b637394ba6da0c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.898920 kubelet[3986]: E1216 13:11:11.898880 3986 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"852a4736cab93d9b11d4132e5d06f838d56840d9131b9194b637394ba6da0c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.899499 kubelet[3986]: E1216 13:11:11.899098 3986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"852a4736cab93d9b11d4132e5d06f838d56840d9131b9194b637394ba6da0c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" Dec 16 13:11:11.899499 kubelet[3986]: E1216 13:11:11.899123 3986 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"852a4736cab93d9b11d4132e5d06f838d56840d9131b9194b637394ba6da0c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" Dec 16 13:11:11.899499 kubelet[3986]: E1216 13:11:11.899168 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67f6997d77-8hwmz_calico-apiserver(02384784-68b8-42d3-aba8-a97ba2d37c12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f6997d77-8hwmz_calico-apiserver(02384784-68b8-42d3-aba8-a97ba2d37c12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"852a4736cab93d9b11d4132e5d06f838d56840d9131b9194b637394ba6da0c89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:11:11.928579 containerd[2582]: time="2025-12-16T13:11:11.928549681Z" level=error msg="Failed to destroy network for sandbox \"762ba033642306a02084bde8392e8f8d36ee59cc7e98c60faf6a5339c91fa7c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.931043 systemd[1]: run-netns-cni\x2db415b570\x2da241\x2d9cce\x2ded74\x2df7f8b62df6d7.mount: Deactivated successfully. 
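The RunPodSandbox failures above (csi-node-driver-tj5zh, calico-kube-controllers-5469dcf444-55tl9, calico-apiserver-67f6997d77-8hwmz) all report the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file calico-node writes only once it is running with /var/lib/calico/ mounted. A minimal sketch, not part of the captured log, that reproduces the same check from a shell on the node; the path is taken verbatim from the error messages:

    #!/usr/bin/env python3
    # Reproduce the check the Calico CNI plugin reports failing above:
    # "stat /var/lib/calico/nodename: no such file or directory".
    NODENAME = "/var/lib/calico/nodename"

    try:
        with open(NODENAME) as f:
            print(f"{NODENAME} exists; node name is {f.read().strip()!r}")
    except FileNotFoundError:
        # Same condition the plugin hits: per the log's own hint, check that the
        # calico/node container is running and has mounted /var/lib/calico/.
        print(f"{NODENAME} is missing; calico-node has not initialised this node yet")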
Dec 16 13:11:11.937148 containerd[2582]: time="2025-12-16T13:11:11.937102340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8vs8,Uid:2c4149fe-258b-4d56-9699-3c8027ef2524,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"762ba033642306a02084bde8392e8f8d36ee59cc7e98c60faf6a5339c91fa7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.937974 kubelet[3986]: E1216 13:11:11.937948 3986 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"762ba033642306a02084bde8392e8f8d36ee59cc7e98c60faf6a5339c91fa7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.938048 kubelet[3986]: E1216 13:11:11.937990 3986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"762ba033642306a02084bde8392e8f8d36ee59cc7e98c60faf6a5339c91fa7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q8vs8" Dec 16 13:11:11.938048 kubelet[3986]: E1216 13:11:11.938014 3986 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"762ba033642306a02084bde8392e8f8d36ee59cc7e98c60faf6a5339c91fa7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q8vs8" Dec 16 13:11:11.938098 kubelet[3986]: E1216 13:11:11.938046 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q8vs8_kube-system(2c4149fe-258b-4d56-9699-3c8027ef2524)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q8vs8_kube-system(2c4149fe-258b-4d56-9699-3c8027ef2524)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"762ba033642306a02084bde8392e8f8d36ee59cc7e98c60faf6a5339c91fa7c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q8vs8" podUID="2c4149fe-258b-4d56-9699-3c8027ef2524" Dec 16 13:11:11.939859 containerd[2582]: time="2025-12-16T13:11:11.939822867Z" level=error msg="Failed to destroy network for sandbox \"f4e824ae6136d003839835ea4f227401aebd7b41e4db0cc775d1dbd7b8dd5a3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.947621 containerd[2582]: time="2025-12-16T13:11:11.947597153Z" level=error msg="Failed to destroy network for sandbox \"d89657d2653bc48894bb88335552360bfaa4d6d7ca45012a02b440fc563ded4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 16 13:11:11.948165 containerd[2582]: time="2025-12-16T13:11:11.948142677Z" level=error msg="Failed to destroy network for sandbox \"db709a5dc66d402554314145a827394f2b41c3d86ec7b97088048074e30fc51b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.948528 containerd[2582]: time="2025-12-16T13:11:11.948456540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5784bdfdb5-2zcqt,Uid:4125d4ae-25c4-4e66-8dbd-1294c8e7e14d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e824ae6136d003839835ea4f227401aebd7b41e4db0cc775d1dbd7b8dd5a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.948626 kubelet[3986]: E1216 13:11:11.948594 3986 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e824ae6136d003839835ea4f227401aebd7b41e4db0cc775d1dbd7b8dd5a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.948658 kubelet[3986]: E1216 13:11:11.948633 3986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e824ae6136d003839835ea4f227401aebd7b41e4db0cc775d1dbd7b8dd5a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5784bdfdb5-2zcqt" Dec 16 13:11:11.948658 kubelet[3986]: E1216 13:11:11.948652 3986 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e824ae6136d003839835ea4f227401aebd7b41e4db0cc775d1dbd7b8dd5a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5784bdfdb5-2zcqt" Dec 16 13:11:11.948963 kubelet[3986]: E1216 13:11:11.948680 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5784bdfdb5-2zcqt_calico-system(4125d4ae-25c4-4e66-8dbd-1294c8e7e14d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5784bdfdb5-2zcqt_calico-system(4125d4ae-25c4-4e66-8dbd-1294c8e7e14d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4e824ae6136d003839835ea4f227401aebd7b41e4db0cc775d1dbd7b8dd5a3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5784bdfdb5-2zcqt" podUID="4125d4ae-25c4-4e66-8dbd-1294c8e7e14d" Dec 16 13:11:11.949436 containerd[2582]: time="2025-12-16T13:11:11.949411473Z" level=error msg="Failed to destroy network for sandbox \"6d9449cc40ad6ba17b7da60ae930c39b1f009ebb526c4a036647eacbd7f42ccd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.958932 containerd[2582]: time="2025-12-16T13:11:11.958898704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7sznl,Uid:c9d340d2-955d-4739-b7cb-fc9a188575e3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89657d2653bc48894bb88335552360bfaa4d6d7ca45012a02b440fc563ded4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.959094 kubelet[3986]: E1216 13:11:11.959073 3986 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89657d2653bc48894bb88335552360bfaa4d6d7ca45012a02b440fc563ded4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.959135 kubelet[3986]: E1216 13:11:11.959113 3986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89657d2653bc48894bb88335552360bfaa4d6d7ca45012a02b440fc563ded4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7sznl" Dec 16 13:11:11.959167 kubelet[3986]: E1216 13:11:11.959130 3986 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89657d2653bc48894bb88335552360bfaa4d6d7ca45012a02b440fc563ded4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7sznl" Dec 16 13:11:11.959188 kubelet[3986]: E1216 13:11:11.959160 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7sznl_calico-system(c9d340d2-955d-4739-b7cb-fc9a188575e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7sznl_calico-system(c9d340d2-955d-4739-b7cb-fc9a188575e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d89657d2653bc48894bb88335552360bfaa4d6d7ca45012a02b440fc563ded4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:11:11.961532 containerd[2582]: time="2025-12-16T13:11:11.961500213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-skmx5,Uid:9b89c7b3-c47a-4d49-a11e-8c488d12250e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db709a5dc66d402554314145a827394f2b41c3d86ec7b97088048074e30fc51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.961917 kubelet[3986]: E1216 13:11:11.961895 3986 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db709a5dc66d402554314145a827394f2b41c3d86ec7b97088048074e30fc51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.961994 kubelet[3986]: E1216 13:11:11.961924 3986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db709a5dc66d402554314145a827394f2b41c3d86ec7b97088048074e30fc51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-skmx5" Dec 16 13:11:11.961994 kubelet[3986]: E1216 13:11:11.961950 3986 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db709a5dc66d402554314145a827394f2b41c3d86ec7b97088048074e30fc51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-skmx5" Dec 16 13:11:11.961994 kubelet[3986]: E1216 13:11:11.961981 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-skmx5_kube-system(9b89c7b3-c47a-4d49-a11e-8c488d12250e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-skmx5_kube-system(9b89c7b3-c47a-4d49-a11e-8c488d12250e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db709a5dc66d402554314145a827394f2b41c3d86ec7b97088048074e30fc51b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-skmx5" podUID="9b89c7b3-c47a-4d49-a11e-8c488d12250e" Dec 16 13:11:11.964207 containerd[2582]: time="2025-12-16T13:11:11.964179029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6997d77-z274w,Uid:f055798a-e699-42ea-8c24-f7896c6361d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9449cc40ad6ba17b7da60ae930c39b1f009ebb526c4a036647eacbd7f42ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.964346 kubelet[3986]: E1216 13:11:11.964324 3986 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9449cc40ad6ba17b7da60ae930c39b1f009ebb526c4a036647eacbd7f42ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:11:11.964389 kubelet[3986]: E1216 13:11:11.964373 3986 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9449cc40ad6ba17b7da60ae930c39b1f009ebb526c4a036647eacbd7f42ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" Dec 16 13:11:11.964420 kubelet[3986]: E1216 13:11:11.964390 3986 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9449cc40ad6ba17b7da60ae930c39b1f009ebb526c4a036647eacbd7f42ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" Dec 16 13:11:11.964444 kubelet[3986]: E1216 13:11:11.964422 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67f6997d77-z274w_calico-apiserver(f055798a-e699-42ea-8c24-f7896c6361d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f6997d77-z274w_calico-apiserver(f055798a-e699-42ea-8c24-f7896c6361d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d9449cc40ad6ba17b7da60ae930c39b1f009ebb526c4a036647eacbd7f42ccd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:11:12.076124 containerd[2582]: time="2025-12-16T13:11:12.074892403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:11:12.787199 systemd[1]: run-netns-cni\x2d09d58667\x2d728e\x2dd396\x2d092a\x2dbdd4010ecbf8.mount: Deactivated successfully. Dec 16 13:11:12.787278 systemd[1]: run-netns-cni\x2d977d795d\x2d4531\x2d61c1\x2d66f0\x2d9036b6f9773a.mount: Deactivated successfully. Dec 16 13:11:12.787327 systemd[1]: run-netns-cni\x2d2b755b3e\x2ddf52\x2dfaad\x2d5352\x2d00bced2353d9.mount: Deactivated successfully. Dec 16 13:11:12.787372 systemd[1]: run-netns-cni\x2d99fadf8c\x2dd34f\x2d757a\x2d1907\x2d62537878a52d.mount: Deactivated successfully. Dec 16 13:11:19.793667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424571846.mount: Deactivated successfully. 
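The run-netns-cni\x2d….mount units that systemd deactivates above are escaped mount-unit names: an unescaped "-" stands for "/" in the mounted path and "\xNN" encodes a literal byte (so \x2d is "-"). A short decoding sketch, assuming those standard systemd escaping rules, applied to the first unit from this block:

    # Turn a systemd mount-unit name from the log back into its mount path.
    # Assumes systemd's usual escaping: "-" separates path components,
    # "\xNN" encodes a literal byte (e.g. \x2d is "-", \x7e is "~").
    import re

    def unescape_mount_unit(unit: str) -> str:
        name = unit.removesuffix(".mount")
        name = name.replace("-", "/")          # separators back to "/"
        name = re.sub(r"\\x([0-9a-fA-F]{2})",  # then decode literal bytes
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unescape_mount_unit(
        r"run-netns-cni\x2d09d58667\x2d728e\x2dd396\x2d092a\x2dbdd4010ecbf8.mount"))
    # -> /run/netns/cni-09d58667-728e-d396-092a-bdd4010ecbf8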
Dec 16 13:11:19.816632 containerd[2582]: time="2025-12-16T13:11:19.816592491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:19.819036 containerd[2582]: time="2025-12-16T13:11:19.818992770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 13:11:19.822014 containerd[2582]: time="2025-12-16T13:11:19.821983867Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:19.825106 containerd[2582]: time="2025-12-16T13:11:19.825065460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:11:19.825555 containerd[2582]: time="2025-12-16T13:11:19.825295507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.750377606s" Dec 16 13:11:19.825555 containerd[2582]: time="2025-12-16T13:11:19.825322110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:11:19.836931 containerd[2582]: time="2025-12-16T13:11:19.836905262Z" level=info msg="CreateContainer within sandbox \"ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:11:19.861669 containerd[2582]: time="2025-12-16T13:11:19.861633928Z" level=info msg="Container f667d614e76d1eb4256b1df1ff8d2c30c822d2d0069ab70b66f04dd36e3a444f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:19.878743 containerd[2582]: time="2025-12-16T13:11:19.878720051Z" level=info msg="CreateContainer within sandbox \"ff9881e5316dd20a42d8e578436e998829a189fb3fcca56cf0a5dcee645ff029\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f667d614e76d1eb4256b1df1ff8d2c30c822d2d0069ab70b66f04dd36e3a444f\"" Dec 16 13:11:19.879291 containerd[2582]: time="2025-12-16T13:11:19.879230122Z" level=info msg="StartContainer for \"f667d614e76d1eb4256b1df1ff8d2c30c822d2d0069ab70b66f04dd36e3a444f\"" Dec 16 13:11:19.880658 containerd[2582]: time="2025-12-16T13:11:19.880633445Z" level=info msg="connecting to shim f667d614e76d1eb4256b1df1ff8d2c30c822d2d0069ab70b66f04dd36e3a444f" address="unix:///run/containerd/s/5599ba4e756dd251531a8988cdd66e872edf2004e5d492c03cb3948ee2876117" protocol=ttrpc version=3 Dec 16 13:11:19.899872 systemd[1]: Started cri-containerd-f667d614e76d1eb4256b1df1ff8d2c30c822d2d0069ab70b66f04dd36e3a444f.scope - libcontainer container f667d614e76d1eb4256b1df1ff8d2c30c822d2d0069ab70b66f04dd36e3a444f. 
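The pull that completes above reports both its size and its wall-clock duration ("bytes read=156880025", "in 7.750377606s"), so the effective download rate can be read straight off the log. A quick worked calculation using only those two figures:

    # Effective throughput of the ghcr.io/flatcar/calico/node:v3.30.4 pull,
    # using the numbers containerd logs above.
    bytes_read = 156_880_025      # "active requests=0, bytes read=156880025"
    duration_s = 7.750377606      # "... in 7.750377606s"

    print(f"{bytes_read / duration_s / (1024 * 1024):.1f} MiB/s")  # ~19.3 MiB/s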
Dec 16 13:11:19.933000 audit: BPF prog-id=196 op=LOAD Dec 16 13:11:19.937110 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 13:11:19.937188 kernel: audit: type=1334 audit(1765890679.933:598): prog-id=196 op=LOAD Dec 16 13:11:19.937637 kernel: audit: type=1300 audit(1765890679.933:598): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4559 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:19.933000 audit[5058]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4559 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:19.943779 kernel: audit: type=1327 audit(1765890679.933:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636363764363134653736643165623432353662316466316666386432 Dec 16 13:11:19.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636363764363134653736643165623432353662316466316666386432 Dec 16 13:11:19.935000 audit: BPF prog-id=197 op=LOAD Dec 16 13:11:19.935000 audit[5058]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4559 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:19.948935 kernel: audit: type=1334 audit(1765890679.935:599): prog-id=197 op=LOAD Dec 16 13:11:19.948976 kernel: audit: type=1300 audit(1765890679.935:599): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4559 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:19.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636363764363134653736643165623432353662316466316666386432 Dec 16 13:11:19.952797 kernel: audit: type=1327 audit(1765890679.935:599): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636363764363134653736643165623432353662316466316666386432 Dec 16 13:11:19.935000 audit: BPF prog-id=197 op=UNLOAD Dec 16 13:11:19.960385 kernel: audit: type=1334 audit(1765890679.935:600): prog-id=197 op=UNLOAD Dec 16 13:11:19.935000 audit[5058]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:19.935000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636363764363134653736643165623432353662316466316666386432 Dec 16 13:11:19.975463 kernel: audit: type=1300 audit(1765890679.935:600): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:19.975554 kernel: audit: type=1327 audit(1765890679.935:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636363764363134653736643165623432353662316466316666386432 Dec 16 13:11:19.976728 kernel: audit: type=1334 audit(1765890679.935:601): prog-id=196 op=UNLOAD Dec 16 13:11:19.935000 audit: BPF prog-id=196 op=UNLOAD Dec 16 13:11:19.935000 audit[5058]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:19.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636363764363134653736643165623432353662316466316666386432 Dec 16 13:11:19.935000 audit: BPF prog-id=198 op=LOAD Dec 16 13:11:19.935000 audit[5058]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4559 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:19.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636363764363134653736643165623432353662316466316666386432 Dec 16 13:11:19.988653 containerd[2582]: time="2025-12-16T13:11:19.988622492Z" level=info msg="StartContainer for \"f667d614e76d1eb4256b1df1ff8d2c30c822d2d0069ab70b66f04dd36e3a444f\" returns successfully" Dec 16 13:11:20.108248 kubelet[3986]: I1216 13:11:20.108201 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s7774" podStartSLOduration=1.827322252 podStartE2EDuration="21.108183968s" podCreationTimestamp="2025-12-16 13:10:59 +0000 UTC" firstStartedPulling="2025-12-16 13:11:00.545035202 +0000 UTC m=+21.660577116" lastFinishedPulling="2025-12-16 13:11:19.825896921 +0000 UTC m=+40.941438832" observedRunningTime="2025-12-16 13:11:20.107474142 +0000 UTC m=+41.223016061" watchObservedRunningTime="2025-12-16 13:11:20.108183968 +0000 UTC m=+41.223725887" Dec 16 13:11:20.519972 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:11:20.520043 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
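The pod_startup_latency_tracker entry above for calico-node-s7774 reports podStartE2EDuration="21.108183968s" and podStartSLOduration=1.827322252. Those figures are consistent with the SLO number being the end-to-end duration minus the image-pull window bounded by firstStartedPulling and lastFinishedPulling; that reading is inferred from the arithmetic rather than stated in the log. A short check using only timestamps copied from that entry:

    # Relate podStartSLOduration to the other figures in the kubelet entry above.
    # Timestamps are copied from the log, truncated to microseconds for %f.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    first_pull = datetime.strptime("2025-12-16 13:11:00.545035", fmt)
    last_pull = datetime.strptime("2025-12-16 13:11:19.825896", fmt)

    e2e = 21.108183968                                  # podStartE2EDuration
    pulling = (last_pull - first_pull).total_seconds()  # ~19.280861s

    print(f"e2e minus pull window: {e2e - pulling:.6f}s")
    # ~1.827323s, within a few microseconds of podStartSLOduration=1.827322252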
Dec 16 13:11:20.732603 kubelet[3986]: I1216 13:11:20.732577 3986 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-whisker-ca-bundle\") pod \"4125d4ae-25c4-4e66-8dbd-1294c8e7e14d\" (UID: \"4125d4ae-25c4-4e66-8dbd-1294c8e7e14d\") " Dec 16 13:11:20.732726 kubelet[3986]: I1216 13:11:20.732620 3986 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-whisker-backend-key-pair\") pod \"4125d4ae-25c4-4e66-8dbd-1294c8e7e14d\" (UID: \"4125d4ae-25c4-4e66-8dbd-1294c8e7e14d\") " Dec 16 13:11:20.732726 kubelet[3986]: I1216 13:11:20.732639 3986 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt94f\" (UniqueName: \"kubernetes.io/projected/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-kube-api-access-jt94f\") pod \"4125d4ae-25c4-4e66-8dbd-1294c8e7e14d\" (UID: \"4125d4ae-25c4-4e66-8dbd-1294c8e7e14d\") " Dec 16 13:11:20.733239 kubelet[3986]: I1216 13:11:20.733209 3986 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4125d4ae-25c4-4e66-8dbd-1294c8e7e14d" (UID: "4125d4ae-25c4-4e66-8dbd-1294c8e7e14d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:11:20.737377 kubelet[3986]: I1216 13:11:20.737149 3986 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4125d4ae-25c4-4e66-8dbd-1294c8e7e14d" (UID: "4125d4ae-25c4-4e66-8dbd-1294c8e7e14d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:11:20.739178 kubelet[3986]: I1216 13:11:20.739134 3986 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-kube-api-access-jt94f" (OuterVolumeSpecName: "kube-api-access-jt94f") pod "4125d4ae-25c4-4e66-8dbd-1294c8e7e14d" (UID: "4125d4ae-25c4-4e66-8dbd-1294c8e7e14d"). InnerVolumeSpecName "kube-api-access-jt94f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:11:20.794336 systemd[1]: var-lib-kubelet-pods-4125d4ae\x2d25c4\x2d4e66\x2d8dbd\x2d1294c8e7e14d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djt94f.mount: Deactivated successfully. Dec 16 13:11:20.794416 systemd[1]: var-lib-kubelet-pods-4125d4ae\x2d25c4\x2d4e66\x2d8dbd\x2d1294c8e7e14d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
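The two mount units torn down above decode (with \x2d as "-" and \x7e as "~") to /var/lib/kubelet/pods/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d/volumes/kubernetes.io~projected/kube-api-access-jt94f and .../volumes/kubernetes.io~secret/whisker-backend-key-pair, i.e. kubelet's per-pod volume directories. A small sketch, assuming only that on-disk layout, that lists whatever volume directories remain for a pod UID, one way to cross-check the "Volume detached" messages that follow:

    # List the per-plugin volume directories kubelet still holds for one pod,
    # here the whisker pod being torn down above. Layout assumed:
    # /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<volume>
    import os

    pod_uid = "4125d4ae-25c4-4e66-8dbd-1294c8e7e14d"
    volumes_dir = f"/var/lib/kubelet/pods/{pod_uid}/volumes"

    if not os.path.isdir(volumes_dir):
        print(f"{volumes_dir}: gone")
    else:
        for plugin in sorted(os.listdir(volumes_dir)):   # e.g. kubernetes.io~secret
            for vol in sorted(os.listdir(os.path.join(volumes_dir, plugin))):
                print(f"still present: {plugin}/{vol}")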
Dec 16 13:11:20.833831 kubelet[3986]: I1216 13:11:20.833783 3986 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-whisker-backend-key-pair\") on node \"ci-4547.0.0-a-e647365c22\" DevicePath \"\"" Dec 16 13:11:20.833831 kubelet[3986]: I1216 13:11:20.833804 3986 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jt94f\" (UniqueName: \"kubernetes.io/projected/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-kube-api-access-jt94f\") on node \"ci-4547.0.0-a-e647365c22\" DevicePath \"\"" Dec 16 13:11:20.833831 kubelet[3986]: I1216 13:11:20.833814 3986 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d-whisker-ca-bundle\") on node \"ci-4547.0.0-a-e647365c22\" DevicePath \"\"" Dec 16 13:11:20.978137 systemd[1]: Removed slice kubepods-besteffort-pod4125d4ae_25c4_4e66_8dbd_1294c8e7e14d.slice - libcontainer container kubepods-besteffort-pod4125d4ae_25c4_4e66_8dbd_1294c8e7e14d.slice. Dec 16 13:11:21.163107 systemd[1]: Created slice kubepods-besteffort-pod58631ddf_c520_4b8f_9eb4_5eeeca2898ef.slice - libcontainer container kubepods-besteffort-pod58631ddf_c520_4b8f_9eb4_5eeeca2898ef.slice. Dec 16 13:11:21.236079 kubelet[3986]: I1216 13:11:21.236056 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqxfq\" (UniqueName: \"kubernetes.io/projected/58631ddf-c520-4b8f-9eb4-5eeeca2898ef-kube-api-access-fqxfq\") pod \"whisker-6fc5cb4685-dj4s5\" (UID: \"58631ddf-c520-4b8f-9eb4-5eeeca2898ef\") " pod="calico-system/whisker-6fc5cb4685-dj4s5" Dec 16 13:11:21.236335 kubelet[3986]: I1216 13:11:21.236084 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58631ddf-c520-4b8f-9eb4-5eeeca2898ef-whisker-backend-key-pair\") pod \"whisker-6fc5cb4685-dj4s5\" (UID: \"58631ddf-c520-4b8f-9eb4-5eeeca2898ef\") " pod="calico-system/whisker-6fc5cb4685-dj4s5" Dec 16 13:11:21.236335 kubelet[3986]: I1216 13:11:21.236114 3986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58631ddf-c520-4b8f-9eb4-5eeeca2898ef-whisker-ca-bundle\") pod \"whisker-6fc5cb4685-dj4s5\" (UID: \"58631ddf-c520-4b8f-9eb4-5eeeca2898ef\") " pod="calico-system/whisker-6fc5cb4685-dj4s5" Dec 16 13:11:21.467684 containerd[2582]: time="2025-12-16T13:11:21.467617280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fc5cb4685-dj4s5,Uid:58631ddf-c520-4b8f-9eb4-5eeeca2898ef,Namespace:calico-system,Attempt:0,}" Dec 16 13:11:21.613997 systemd-networkd[2197]: cali33b22c4ccac: Link UP Dec 16 13:11:21.615210 systemd-networkd[2197]: cali33b22c4ccac: Gained carrier Dec 16 13:11:21.627151 containerd[2582]: 2025-12-16 13:11:21.500 [INFO][5174] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:11:21.627151 containerd[2582]: 2025-12-16 13:11:21.507 [INFO][5174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0 whisker-6fc5cb4685- calico-system 58631ddf-c520-4b8f-9eb4-5eeeca2898ef 870 0 2025-12-16 13:11:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fc5cb4685 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-a-e647365c22 whisker-6fc5cb4685-dj4s5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali33b22c4ccac [] [] }} ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Namespace="calico-system" Pod="whisker-6fc5cb4685-dj4s5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-" Dec 16 13:11:21.627151 containerd[2582]: 2025-12-16 13:11:21.507 [INFO][5174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Namespace="calico-system" Pod="whisker-6fc5cb4685-dj4s5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" Dec 16 13:11:21.627151 containerd[2582]: 2025-12-16 13:11:21.524 [INFO][5187] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" HandleID="k8s-pod-network.3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Workload="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" Dec 16 13:11:21.627340 containerd[2582]: 2025-12-16 13:11:21.524 [INFO][5187] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" HandleID="k8s-pod-network.3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Workload="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-e647365c22", "pod":"whisker-6fc5cb4685-dj4s5", "timestamp":"2025-12-16 13:11:21.52468423 +0000 UTC"}, Hostname:"ci-4547.0.0-a-e647365c22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:11:21.627340 containerd[2582]: 2025-12-16 13:11:21.524 [INFO][5187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:11:21.627340 containerd[2582]: 2025-12-16 13:11:21.524 [INFO][5187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:11:21.627340 containerd[2582]: 2025-12-16 13:11:21.524 [INFO][5187] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-e647365c22' Dec 16 13:11:21.627340 containerd[2582]: 2025-12-16 13:11:21.528 [INFO][5187] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:21.627340 containerd[2582]: 2025-12-16 13:11:21.531 [INFO][5187] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:21.627340 containerd[2582]: 2025-12-16 13:11:21.535 [INFO][5187] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:21.627340 containerd[2582]: 2025-12-16 13:11:21.537 [INFO][5187] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:21.627340 containerd[2582]: 2025-12-16 13:11:21.538 [INFO][5187] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:21.627506 containerd[2582]: 2025-12-16 13:11:21.538 [INFO][5187] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:21.627506 containerd[2582]: 2025-12-16 13:11:21.539 [INFO][5187] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe Dec 16 13:11:21.627506 containerd[2582]: 2025-12-16 13:11:21.542 [INFO][5187] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:21.627506 containerd[2582]: 2025-12-16 13:11:21.549 [INFO][5187] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.129/26] block=192.168.64.128/26 handle="k8s-pod-network.3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:21.627506 containerd[2582]: 2025-12-16 13:11:21.549 [INFO][5187] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.129/26] handle="k8s-pod-network.3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:21.627506 containerd[2582]: 2025-12-16 13:11:21.549 [INFO][5187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
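The IPAM trace above confirms this host's affinity for block 192.168.64.128/26 and then claims 192.168.64.129 from it (logged as 192.168.64.129/26 at claim time and as a /32 on the endpoint below). The block arithmetic can be sanity-checked with nothing beyond the CIDRs printed in the log:

    # Sanity-check the Calico IPAM block from the trace above.
    import ipaddress

    block = ipaddress.ip_network("192.168.64.128/26")
    assigned = ipaddress.ip_address("192.168.64.129")

    print(block.num_addresses)                          # 64 addresses in a /26
    print(block.network_address, "-", block.broadcast_address)
    # 192.168.64.128 - 192.168.64.191
    print(assigned in block)  # True: .129 is the first address after the block base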
Dec 16 13:11:21.627506 containerd[2582]: 2025-12-16 13:11:21.549 [INFO][5187] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.129/26] IPv6=[] ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" HandleID="k8s-pod-network.3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Workload="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" Dec 16 13:11:21.627595 containerd[2582]: 2025-12-16 13:11:21.551 [INFO][5174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Namespace="calico-system" Pod="whisker-6fc5cb4685-dj4s5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0", GenerateName:"whisker-6fc5cb4685-", Namespace:"calico-system", SelfLink:"", UID:"58631ddf-c520-4b8f-9eb4-5eeeca2898ef", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 11, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fc5cb4685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"", Pod:"whisker-6fc5cb4685-dj4s5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali33b22c4ccac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:21.627595 containerd[2582]: 2025-12-16 13:11:21.551 [INFO][5174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.129/32] ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Namespace="calico-system" Pod="whisker-6fc5cb4685-dj4s5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" Dec 16 13:11:21.627646 containerd[2582]: 2025-12-16 13:11:21.552 [INFO][5174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33b22c4ccac ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Namespace="calico-system" Pod="whisker-6fc5cb4685-dj4s5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" Dec 16 13:11:21.627646 containerd[2582]: 2025-12-16 13:11:21.614 [INFO][5174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Namespace="calico-system" Pod="whisker-6fc5cb4685-dj4s5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" Dec 16 13:11:21.627671 containerd[2582]: 2025-12-16 13:11:21.615 [INFO][5174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Namespace="calico-system" 
Pod="whisker-6fc5cb4685-dj4s5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0", GenerateName:"whisker-6fc5cb4685-", Namespace:"calico-system", SelfLink:"", UID:"58631ddf-c520-4b8f-9eb4-5eeeca2898ef", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 11, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fc5cb4685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe", Pod:"whisker-6fc5cb4685-dj4s5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali33b22c4ccac", MAC:"4e:d2:81:67:ff:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:21.627740 containerd[2582]: 2025-12-16 13:11:21.624 [INFO][5174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" Namespace="calico-system" Pod="whisker-6fc5cb4685-dj4s5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-whisker--6fc5cb4685--dj4s5-eth0" Dec 16 13:11:21.659856 containerd[2582]: time="2025-12-16T13:11:21.659799872Z" level=info msg="connecting to shim 3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe" address="unix:///run/containerd/s/63c7b8be10b3fc2229c6bc8ab81992d04c1f60a08a540796dac310b5d8818f37" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:21.678846 systemd[1]: Started cri-containerd-3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe.scope - libcontainer container 3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe. 
Dec 16 13:11:21.685000 audit: BPF prog-id=199 op=LOAD Dec 16 13:11:21.686000 audit: BPF prog-id=200 op=LOAD Dec 16 13:11:21.686000 audit[5222]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5210 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:21.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333661363961363533366462393065656665626537613432306538 Dec 16 13:11:21.686000 audit: BPF prog-id=200 op=UNLOAD Dec 16 13:11:21.686000 audit[5222]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5210 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:21.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333661363961363533366462393065656665626537613432306538 Dec 16 13:11:21.686000 audit: BPF prog-id=201 op=LOAD Dec 16 13:11:21.686000 audit[5222]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5210 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:21.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333661363961363533366462393065656665626537613432306538 Dec 16 13:11:21.686000 audit: BPF prog-id=202 op=LOAD Dec 16 13:11:21.686000 audit[5222]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5210 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:21.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333661363961363533366462393065656665626537613432306538 Dec 16 13:11:21.686000 audit: BPF prog-id=202 op=UNLOAD Dec 16 13:11:21.686000 audit[5222]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5210 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:21.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333661363961363533366462393065656665626537613432306538 Dec 16 13:11:21.686000 audit: BPF prog-id=201 op=UNLOAD Dec 16 13:11:21.686000 audit[5222]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5210 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:21.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333661363961363533366462393065656665626537613432306538 Dec 16 13:11:21.686000 audit: BPF prog-id=203 op=LOAD Dec 16 13:11:21.686000 audit[5222]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5210 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:21.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333661363961363533366462393065656665626537613432306538 Dec 16 13:11:21.714572 containerd[2582]: time="2025-12-16T13:11:21.714543416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fc5cb4685-dj4s5,Uid:58631ddf-c520-4b8f-9eb4-5eeeca2898ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b36a69a6536db90eefebe7a420e880ea06b0af2cdaed30fb801c9dba92a0afe\"" Dec 16 13:11:21.715932 containerd[2582]: time="2025-12-16T13:11:21.715911359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:11:21.966805 containerd[2582]: time="2025-12-16T13:11:21.966779586Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:21.969828 containerd[2582]: time="2025-12-16T13:11:21.969795800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:11:21.969898 containerd[2582]: time="2025-12-16T13:11:21.969867061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:21.970081 kubelet[3986]: E1216 13:11:21.970011 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:11:21.970081 kubelet[3986]: E1216 13:11:21.970068 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:11:21.970331 kubelet[3986]: E1216 13:11:21.970302 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:790b37891fd14fdf909b39bfc1c2dcaf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqxfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc5cb4685-dj4s5_calico-system(58631ddf-c520-4b8f-9eb4-5eeeca2898ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:21.973129 containerd[2582]: time="2025-12-16T13:11:21.973105774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:11:22.039000 audit: BPF prog-id=204 op=LOAD Dec 16 13:11:22.039000 audit[5347]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcab03bf90 a2=98 a3=1fffffffffffffff items=0 ppid=5258 pid=5347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.039000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:11:22.039000 audit: BPF prog-id=204 op=UNLOAD Dec 16 13:11:22.039000 audit[5347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcab03bf60 a3=0 items=0 ppid=5258 pid=5347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.039000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:11:22.039000 audit: BPF prog-id=205 op=LOAD Dec 16 13:11:22.039000 audit[5347]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcab03be70 a2=94 a3=3 items=0 ppid=5258 pid=5347 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.039000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:11:22.039000 audit: BPF prog-id=205 op=UNLOAD Dec 16 13:11:22.039000 audit[5347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcab03be70 a2=94 a3=3 items=0 ppid=5258 pid=5347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.039000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:11:22.039000 audit: BPF prog-id=206 op=LOAD Dec 16 13:11:22.039000 audit[5347]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcab03beb0 a2=94 a3=7ffcab03c090 items=0 ppid=5258 pid=5347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.039000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:11:22.040000 audit: BPF prog-id=206 op=UNLOAD Dec 16 13:11:22.040000 audit[5347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcab03beb0 a2=94 a3=7ffcab03c090 items=0 ppid=5258 pid=5347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.040000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:11:22.046000 audit: BPF prog-id=207 op=LOAD Dec 16 13:11:22.046000 audit[5348]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1ffee440 a2=98 a3=3 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.046000 audit: BPF prog-id=207 op=UNLOAD Dec 16 13:11:22.046000 audit[5348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff1ffee410 a3=0 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.047000 audit: BPF prog-id=208 op=LOAD 
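The PROCTITLE fields in the audit records above are the invoking command lines, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch to turn them back into readable commands (the script name and regex are illustrative, not part of this system; it just reads this log on stdin):

    # decode_proctitle.py -- hex-decode audit PROCTITLE values into readable argv strings
    import re
    import sys

    PROCTITLE_RE = re.compile(r"proctitle=([0-9A-Fa-f]+)")

    def decode(hex_title: str) -> str:
        # NUL separates argv entries in the audit encoding; show them as spaces
        return bytes.fromhex(hex_title).replace(b"\x00", b" ").decode("utf-8", "replace")

    if __name__ == "__main__":
        for line in sys.stdin:
            m = PROCTITLE_RE.search(line)
            if m:
                print(decode(m.group(1)))

Applied to the records above, the bpftool entries decode to commands such as "bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0" and "bpftool map list --json", consistent with the calico-node (felix) BPF dataplane setup seen elsewhere in this log.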
Dec 16 13:11:22.047000 audit[5348]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1ffee230 a2=94 a3=54428f items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.047000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.047000 audit: BPF prog-id=208 op=UNLOAD Dec 16 13:11:22.047000 audit[5348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff1ffee230 a2=94 a3=54428f items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.047000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.047000 audit: BPF prog-id=209 op=LOAD Dec 16 13:11:22.047000 audit[5348]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1ffee260 a2=94 a3=2 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.047000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.047000 audit: BPF prog-id=209 op=UNLOAD Dec 16 13:11:22.047000 audit[5348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff1ffee260 a2=0 a3=2 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.047000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.196000 audit: BPF prog-id=210 op=LOAD Dec 16 13:11:22.196000 audit[5348]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1ffee120 a2=94 a3=1 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.196000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.196000 audit: BPF prog-id=210 op=UNLOAD Dec 16 13:11:22.196000 audit[5348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff1ffee120 a2=94 a3=1 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.196000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.203000 audit: BPF prog-id=211 op=LOAD Dec 16 13:11:22.203000 audit[5348]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1ffee110 a2=94 a3=4 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.203000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.203000 audit: BPF prog-id=211 op=UNLOAD Dec 16 13:11:22.203000 audit[5348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff1ffee110 a2=0 a3=4 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.203000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.204000 audit: BPF prog-id=212 op=LOAD Dec 16 13:11:22.204000 audit[5348]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff1ffedf70 a2=94 a3=5 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.204000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.204000 audit: BPF prog-id=212 op=UNLOAD Dec 16 13:11:22.204000 audit[5348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff1ffedf70 a2=0 a3=5 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.204000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.204000 audit: BPF prog-id=213 op=LOAD Dec 16 13:11:22.204000 audit[5348]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1ffee190 a2=94 a3=6 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.204000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.204000 audit: BPF prog-id=213 op=UNLOAD Dec 16 13:11:22.204000 audit[5348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff1ffee190 a2=0 a3=6 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.204000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.204000 audit: BPF prog-id=214 op=LOAD Dec 16 13:11:22.204000 audit[5348]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1ffed940 a2=94 a3=88 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.204000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.204000 audit: BPF prog-id=215 op=LOAD Dec 16 13:11:22.204000 audit[5348]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff1ffed7c0 a2=94 a3=2 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.204000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.204000 audit: BPF prog-id=215 op=UNLOAD Dec 16 13:11:22.204000 audit[5348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff1ffed7f0 a2=0 a3=7fff1ffed8f0 items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.204000 
audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.204000 audit: BPF prog-id=214 op=UNLOAD Dec 16 13:11:22.204000 audit[5348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1331d10 a2=0 a3=bc507506cae4c6ae items=0 ppid=5258 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.204000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:11:22.210000 audit: BPF prog-id=216 op=LOAD Dec 16 13:11:22.210000 audit[5371]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0ec43000 a2=98 a3=1999999999999999 items=0 ppid=5258 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.210000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:11:22.210000 audit: BPF prog-id=216 op=UNLOAD Dec 16 13:11:22.210000 audit[5371]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe0ec42fd0 a3=0 items=0 ppid=5258 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.210000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:11:22.210000 audit: BPF prog-id=217 op=LOAD Dec 16 13:11:22.210000 audit[5371]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0ec42ee0 a2=94 a3=ffff items=0 ppid=5258 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.210000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:11:22.210000 audit: BPF prog-id=217 op=UNLOAD Dec 16 13:11:22.210000 audit[5371]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe0ec42ee0 a2=94 a3=ffff items=0 ppid=5258 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.210000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:11:22.210000 audit: BPF prog-id=218 op=LOAD Dec 16 13:11:22.210000 audit[5371]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0ec42f20 a2=94 a3=7ffe0ec43100 items=0 ppid=5258 
pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.210000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:11:22.210000 audit: BPF prog-id=218 op=UNLOAD Dec 16 13:11:22.210000 audit[5371]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe0ec42f20 a2=94 a3=7ffe0ec43100 items=0 ppid=5258 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.210000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:11:22.216852 containerd[2582]: time="2025-12-16T13:11:22.216772187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:22.219980 containerd[2582]: time="2025-12-16T13:11:22.219915898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:11:22.219980 containerd[2582]: time="2025-12-16T13:11:22.219942732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:22.220105 kubelet[3986]: E1216 13:11:22.220066 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:11:22.220157 kubelet[3986]: E1216 13:11:22.220114 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:11:22.220239 kubelet[3986]: E1216 13:11:22.220209 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqxfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc5cb4685-dj4s5_calico-system(58631ddf-c520-4b8f-9eb4-5eeeca2898ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:22.221500 kubelet[3986]: E1216 13:11:22.221469 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:11:22.323950 systemd-networkd[2197]: vxlan.calico: Link UP Dec 16 13:11:22.323955 systemd-networkd[2197]: vxlan.calico: Gained carrier Dec 16 13:11:22.341000 audit: BPF prog-id=219 op=LOAD Dec 16 13:11:22.341000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff08bccd30 a2=98 a3=0 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.341000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.342000 audit: BPF prog-id=219 op=UNLOAD Dec 16 13:11:22.342000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff08bccd00 a3=0 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.342000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.342000 audit: BPF prog-id=220 op=LOAD Dec 16 13:11:22.342000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff08bccb40 a2=94 a3=54428f items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.342000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.342000 audit: BPF prog-id=220 op=UNLOAD Dec 16 13:11:22.342000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff08bccb40 a2=94 a3=54428f items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.342000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.342000 audit: BPF prog-id=221 op=LOAD Dec 16 13:11:22.342000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff08bccb70 a2=94 a3=2 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.342000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.342000 audit: BPF prog-id=221 op=UNLOAD Dec 16 13:11:22.342000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff08bccb70 a2=0 a3=2 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.342000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.343000 audit: BPF prog-id=222 op=LOAD Dec 16 13:11:22.343000 audit[5396]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=6 a0=5 a1=7fff08bcc920 a2=94 a3=4 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.343000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.343000 audit: BPF prog-id=222 op=UNLOAD Dec 16 13:11:22.343000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff08bcc920 a2=94 a3=4 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.343000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.343000 audit: BPF prog-id=223 op=LOAD Dec 16 13:11:22.343000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff08bcca20 a2=94 a3=7fff08bccba0 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.343000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.343000 audit: BPF prog-id=223 op=UNLOAD Dec 16 13:11:22.343000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff08bcca20 a2=0 a3=7fff08bccba0 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.343000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.343000 audit: BPF prog-id=224 op=LOAD Dec 16 13:11:22.343000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff08bcc150 a2=94 a3=2 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.343000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.343000 audit: BPF prog-id=224 op=UNLOAD Dec 16 13:11:22.343000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff08bcc150 a2=0 a3=2 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.343000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.343000 audit: BPF prog-id=225 op=LOAD Dec 16 13:11:22.343000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff08bcc250 a2=94 a3=30 items=0 ppid=5258 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.343000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:11:22.353000 audit: BPF prog-id=226 op=LOAD Dec 16 13:11:22.353000 audit[5402]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff588eef30 a2=98 a3=0 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.353000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.353000 audit: BPF prog-id=226 op=UNLOAD Dec 16 13:11:22.353000 audit[5402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff588eef00 a3=0 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.353000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.353000 audit: BPF prog-id=227 op=LOAD Dec 16 13:11:22.353000 audit[5402]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff588eed20 a2=94 a3=54428f items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.353000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.353000 audit: BPF prog-id=227 op=UNLOAD Dec 16 13:11:22.353000 audit[5402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff588eed20 a2=94 a3=54428f items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.353000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.353000 audit: BPF prog-id=228 op=LOAD Dec 16 13:11:22.353000 audit[5402]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff588eed50 a2=94 a3=2 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.353000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.353000 audit: BPF prog-id=228 op=UNLOAD Dec 16 13:11:22.353000 audit[5402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff588eed50 a2=0 a3=2 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.353000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.461000 audit: BPF prog-id=229 op=LOAD Dec 16 13:11:22.461000 audit[5402]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff588eec10 a2=94 a3=1 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.461000 audit: BPF prog-id=229 op=UNLOAD Dec 16 13:11:22.461000 audit[5402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff588eec10 a2=94 a3=1 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.469000 audit: BPF prog-id=230 op=LOAD Dec 16 13:11:22.469000 audit[5402]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff588eec00 a2=94 a3=4 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.469000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.469000 audit: BPF prog-id=230 op=UNLOAD Dec 16 13:11:22.469000 audit[5402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff588eec00 a2=0 a3=4 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.469000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.469000 audit: BPF prog-id=231 op=LOAD Dec 16 13:11:22.469000 audit[5402]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff588eea60 a2=94 a3=5 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.469000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.469000 audit: BPF prog-id=231 op=UNLOAD Dec 16 13:11:22.469000 audit[5402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff588eea60 a2=0 a3=5 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.469000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.469000 audit: BPF prog-id=232 op=LOAD Dec 16 13:11:22.469000 audit[5402]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff588eec80 a2=94 a3=6 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.469000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.470000 audit: BPF prog-id=232 op=UNLOAD Dec 16 13:11:22.470000 audit[5402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff588eec80 a2=0 a3=6 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.470000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.470000 audit: BPF prog-id=233 op=LOAD Dec 16 13:11:22.470000 audit[5402]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff588ee430 a2=94 a3=88 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.470000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.470000 audit: BPF prog-id=234 op=LOAD Dec 16 13:11:22.470000 audit[5402]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff588ee2b0 a2=94 a3=2 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.470000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.470000 audit: BPF prog-id=234 op=UNLOAD Dec 16 13:11:22.470000 audit[5402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff588ee2e0 a2=0 a3=7fff588ee3e0 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.470000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.470000 audit: BPF prog-id=233 op=UNLOAD Dec 16 13:11:22.470000 audit[5402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1a032d10 a2=0 a3=4d66cb69862837c7 items=0 ppid=5258 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.470000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:11:22.478000 audit: BPF prog-id=225 op=UNLOAD Dec 16 13:11:22.478000 audit[5258]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000df2240 a2=0 a3=0 items=0 ppid=5252 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.478000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 13:11:22.602000 audit[5425]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=5425 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:22.602000 audit[5425]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc146f1710 a2=0 a3=7ffc146f16fc items=0 ppid=5258 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.602000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:22.603000 audit[5426]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5426 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:22.603000 audit[5426]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffefd0a6080 a2=0 a3=7ffefd0a606c items=0 ppid=5258 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.603000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:22.631000 audit[5422]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5422 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Dec 16 13:11:22.631000 audit[5422]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe640f5660 a2=0 a3=7ffe640f564c items=0 ppid=5258 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.631000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:22.641000 audit[5427]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5427 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:22.641000 audit[5427]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffd60c28c20 a2=0 a3=562fcd910000 items=0 ppid=5258 pid=5427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:22.641000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:22.974889 containerd[2582]: time="2025-12-16T13:11:22.974837351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6997d77-8hwmz,Uid:02384784-68b8-42d3-aba8-a97ba2d37c12,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:11:22.975669 containerd[2582]: time="2025-12-16T13:11:22.975287194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5469dcf444-55tl9,Uid:2527a715-8ea0-4d0b-a053-0e471ff72634,Namespace:calico-system,Attempt:0,}" Dec 16 13:11:22.976583 kubelet[3986]: I1216 13:11:22.976559 3986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4125d4ae-25c4-4e66-8dbd-1294c8e7e14d" path="/var/lib/kubelet/pods/4125d4ae-25c4-4e66-8dbd-1294c8e7e14d/volumes" Dec 16 13:11:22.980836 systemd-networkd[2197]: cali33b22c4ccac: Gained IPv6LL Dec 16 13:11:23.094919 kubelet[3986]: E1216 13:11:23.094884 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:11:23.105255 systemd-networkd[2197]: cali8aafe92ec5e: Link UP Dec 16 13:11:23.106292 systemd-networkd[2197]: cali8aafe92ec5e: Gained carrier Dec 16 13:11:23.128097 containerd[2582]: 2025-12-16 13:11:23.036 [INFO][5440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0 calico-apiserver-67f6997d77- calico-apiserver 
02384784-68b8-42d3-aba8-a97ba2d37c12 796 0 2025-12-16 13:10:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f6997d77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-e647365c22 calico-apiserver-67f6997d77-8hwmz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8aafe92ec5e [] [] }} ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-8hwmz" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-" Dec 16 13:11:23.128097 containerd[2582]: 2025-12-16 13:11:23.036 [INFO][5440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-8hwmz" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" Dec 16 13:11:23.128097 containerd[2582]: 2025-12-16 13:11:23.063 [INFO][5464] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" HandleID="k8s-pod-network.213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Workload="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" Dec 16 13:11:23.128247 containerd[2582]: 2025-12-16 13:11:23.063 [INFO][5464] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" HandleID="k8s-pod-network.213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Workload="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-e647365c22", "pod":"calico-apiserver-67f6997d77-8hwmz", "timestamp":"2025-12-16 13:11:23.063504188 +0000 UTC"}, Hostname:"ci-4547.0.0-a-e647365c22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:11:23.128247 containerd[2582]: 2025-12-16 13:11:23.063 [INFO][5464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:11:23.128247 containerd[2582]: 2025-12-16 13:11:23.063 [INFO][5464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
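The repeated NotFound and ImagePullBackOff records above all concern two references, ghcr.io/flatcar/calico/whisker:v3.30.4 and ghcr.io/flatcar/calico/whisker-backend:v3.30.4. A minimal Python sketch to list every image kubelet reported as unpullable (script name and pattern are illustrative; it only parses the kubelet "Failed to pull image" records as they appear in this log):

    # pull_failures.py -- list image references from kubelet "Failed to pull image" records
    import re
    import sys

    IMAGE_RE = re.compile(r'Failed to pull image.*?image="(?P<image>[^"]+)"')

    failed = set()
    for line in sys.stdin:
        m = IMAGE_RE.search(line)
        if m:
            failed.add(m.group("image"))

    for image in sorted(failed):
        print(image)

On this log it prints the two whisker images above; the "404 Not Found" from ghcr.io points at a tag that is missing (or not visible to anonymous pulls) on the registry side rather than a local containerd problem.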
Dec 16 13:11:23.128247 containerd[2582]: 2025-12-16 13:11:23.063 [INFO][5464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-e647365c22' Dec 16 13:11:23.128247 containerd[2582]: 2025-12-16 13:11:23.067 [INFO][5464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.128247 containerd[2582]: 2025-12-16 13:11:23.070 [INFO][5464] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.128247 containerd[2582]: 2025-12-16 13:11:23.073 [INFO][5464] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.128247 containerd[2582]: 2025-12-16 13:11:23.074 [INFO][5464] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.128247 containerd[2582]: 2025-12-16 13:11:23.076 [INFO][5464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.128971 containerd[2582]: 2025-12-16 13:11:23.076 [INFO][5464] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.128971 containerd[2582]: 2025-12-16 13:11:23.077 [INFO][5464] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7 Dec 16 13:11:23.128971 containerd[2582]: 2025-12-16 13:11:23.080 [INFO][5464] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.128971 containerd[2582]: 2025-12-16 13:11:23.100 [INFO][5464] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.130/26] block=192.168.64.128/26 handle="k8s-pod-network.213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.128971 containerd[2582]: 2025-12-16 13:11:23.100 [INFO][5464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.130/26] handle="k8s-pod-network.213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.128971 containerd[2582]: 2025-12-16 13:11:23.100 [INFO][5464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
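The ipam/ipam.go records above walk through Calico's IPAM flow for this sandbox: take the host-wide IPAM lock, look up the host's block affinity (192.168.64.128/26), claim one address from that block, write the block back, then release the lock. A minimal Python sketch, standard library only, whose regex follows the "Successfully claimed IPs" line format in this log, to confirm each claimed address actually lies inside the block that was reported:

    # ipam_check.py -- sanity-check "Successfully claimed IPs" records against their block
    import ipaddress
    import re
    import sys

    CLAIM_RE = re.compile(
        r'Successfully claimed IPs: \[(?P<ip>[0-9./]+)\] block=(?P<block>[0-9./]+)'
        r'.*host="(?P<host>[^"]+)"'
    )

    for line in sys.stdin:
        m = CLAIM_RE.search(line)
        if not m:
            continue
        ip = ipaddress.ip_interface(m.group("ip")).ip      # e.g. 192.168.64.130
        block = ipaddress.ip_network(m.group("block"))     # e.g. 192.168.64.128/26
        status = "OK" if ip in block else "MISMATCH"
        print(f'{m.group("host")}: {ip} in {block} -> {status}')

For the record above this yields "ci-4547.0.0-a-e647365c22: 192.168.64.130 in 192.168.64.128/26 -> OK"; a /26 block spans 64 addresses (192.168.64.128-192.168.64.191), so the claimed .130 sits well inside it.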
Dec 16 13:11:23.128971 containerd[2582]: 2025-12-16 13:11:23.100 [INFO][5464] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.130/26] IPv6=[] ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" HandleID="k8s-pod-network.213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Workload="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" Dec 16 13:11:23.129200 containerd[2582]: 2025-12-16 13:11:23.102 [INFO][5440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-8hwmz" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0", GenerateName:"calico-apiserver-67f6997d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"02384784-68b8-42d3-aba8-a97ba2d37c12", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f6997d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"", Pod:"calico-apiserver-67f6997d77-8hwmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aafe92ec5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:23.129273 containerd[2582]: 2025-12-16 13:11:23.102 [INFO][5440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.130/32] ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-8hwmz" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" Dec 16 13:11:23.129273 containerd[2582]: 2025-12-16 13:11:23.102 [INFO][5440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8aafe92ec5e ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-8hwmz" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" Dec 16 13:11:23.129273 containerd[2582]: 2025-12-16 13:11:23.105 [INFO][5440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-8hwmz" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" Dec 16 13:11:23.129340 containerd[2582]: 2025-12-16 13:11:23.106 
[INFO][5440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-8hwmz" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0", GenerateName:"calico-apiserver-67f6997d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"02384784-68b8-42d3-aba8-a97ba2d37c12", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f6997d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7", Pod:"calico-apiserver-67f6997d77-8hwmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aafe92ec5e", MAC:"8e:f6:a3:cd:e1:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:23.129393 containerd[2582]: 2025-12-16 13:11:23.125 [INFO][5440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-8hwmz" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--8hwmz-eth0" Dec 16 13:11:23.137000 audit[5485]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:23.137000 audit[5485]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd95dc57c0 a2=0 a3=7ffd95dc57ac items=0 ppid=4121 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:23.142000 audit[5485]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:23.142000 audit[5485]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd95dc57c0 a2=0 a3=0 items=0 ppid=4121 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:11:23.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:23.146000 audit[5486]: NETFILTER_CFG table=filter:128 family=2 entries=50 op=nft_register_chain pid=5486 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:23.146000 audit[5486]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffe909b9320 a2=0 a3=7ffe909b930c items=0 ppid=5258 pid=5486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.146000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:23.171969 containerd[2582]: time="2025-12-16T13:11:23.171902024Z" level=info msg="connecting to shim 213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7" address="unix:///run/containerd/s/16b856a9531e8424c2721611f0781b0252439ecd95fafcc7bdf54cde29f5a90d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:23.205901 systemd[1]: Started cri-containerd-213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7.scope - libcontainer container 213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7. Dec 16 13:11:23.211633 systemd-networkd[2197]: calib2ac01e202e: Link UP Dec 16 13:11:23.213309 systemd-networkd[2197]: calib2ac01e202e: Gained carrier Dec 16 13:11:23.223000 audit: BPF prog-id=235 op=LOAD Dec 16 13:11:23.223000 audit: BPF prog-id=236 op=LOAD Dec 16 13:11:23.223000 audit[5507]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5496 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231336231343639326133323761343834653464316436633839643365 Dec 16 13:11:23.223000 audit: BPF prog-id=236 op=UNLOAD Dec 16 13:11:23.223000 audit[5507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5496 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231336231343639326133323761343834653464316436633839643365 Dec 16 13:11:23.223000 audit: BPF prog-id=237 op=LOAD Dec 16 13:11:23.223000 audit[5507]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5496 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.223000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231336231343639326133323761343834653464316436633839643365 Dec 16 13:11:23.223000 audit: BPF prog-id=238 op=LOAD Dec 16 13:11:23.223000 audit[5507]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5496 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231336231343639326133323761343834653464316436633839643365 Dec 16 13:11:23.223000 audit: BPF prog-id=238 op=UNLOAD Dec 16 13:11:23.223000 audit[5507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5496 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231336231343639326133323761343834653464316436633839643365 Dec 16 13:11:23.223000 audit: BPF prog-id=237 op=UNLOAD Dec 16 13:11:23.223000 audit[5507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5496 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231336231343639326133323761343834653464316436633839643365 Dec 16 13:11:23.223000 audit: BPF prog-id=239 op=LOAD Dec 16 13:11:23.223000 audit[5507]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5496 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231336231343639326133323761343834653464316436633839643365 Dec 16 13:11:23.232430 containerd[2582]: 2025-12-16 13:11:23.037 [INFO][5450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0 calico-kube-controllers-5469dcf444- calico-system 2527a715-8ea0-4d0b-a053-0e471ff72634 793 0 2025-12-16 13:10:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5469dcf444 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-a-e647365c22 calico-kube-controllers-5469dcf444-55tl9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib2ac01e202e [] [] }} ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Namespace="calico-system" Pod="calico-kube-controllers-5469dcf444-55tl9" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-" Dec 16 13:11:23.232430 containerd[2582]: 2025-12-16 13:11:23.037 [INFO][5450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Namespace="calico-system" Pod="calico-kube-controllers-5469dcf444-55tl9" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" Dec 16 13:11:23.232430 containerd[2582]: 2025-12-16 13:11:23.064 [INFO][5467] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" HandleID="k8s-pod-network.6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Workload="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" Dec 16 13:11:23.232547 containerd[2582]: 2025-12-16 13:11:23.065 [INFO][5467] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" HandleID="k8s-pod-network.6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Workload="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac500), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-e647365c22", "pod":"calico-kube-controllers-5469dcf444-55tl9", "timestamp":"2025-12-16 13:11:23.064577347 +0000 UTC"}, Hostname:"ci-4547.0.0-a-e647365c22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:11:23.232547 containerd[2582]: 2025-12-16 13:11:23.065 [INFO][5467] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:11:23.232547 containerd[2582]: 2025-12-16 13:11:23.100 [INFO][5467] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
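
The audit PROCTITLE fields in the records above carry the process title hex-encoded, with NUL bytes separating the argv elements; the first one decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the later ones to the matching iptables-nft-restore and runc invocations. A minimal Go sketch for decoding such a value (the hex string is copied from the log; any other PROCTITLE value can be substituted):

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // PROCTITLE value copied from the first audit record above; the kernel
        // records the process title as hex bytes with NUL separators between
        // argv elements.
        p := "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
        raw, err := hex.DecodeString(p)
        if err != nil {
            panic(err)
        }
        // Replace the NUL separators with spaces to recover the command line.
        fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
        // Prints: iptables-restore -w 5 -W 100000 --noflush --counters
    }
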
Dec 16 13:11:23.232547 containerd[2582]: 2025-12-16 13:11:23.100 [INFO][5467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-e647365c22' Dec 16 13:11:23.232547 containerd[2582]: 2025-12-16 13:11:23.169 [INFO][5467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.232547 containerd[2582]: 2025-12-16 13:11:23.174 [INFO][5467] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.232547 containerd[2582]: 2025-12-16 13:11:23.179 [INFO][5467] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.232547 containerd[2582]: 2025-12-16 13:11:23.186 [INFO][5467] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.232547 containerd[2582]: 2025-12-16 13:11:23.188 [INFO][5467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.232896 containerd[2582]: 2025-12-16 13:11:23.188 [INFO][5467] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.232896 containerd[2582]: 2025-12-16 13:11:23.189 [INFO][5467] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee Dec 16 13:11:23.232896 containerd[2582]: 2025-12-16 13:11:23.193 [INFO][5467] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.232896 containerd[2582]: 2025-12-16 13:11:23.204 [INFO][5467] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.131/26] block=192.168.64.128/26 handle="k8s-pod-network.6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.232896 containerd[2582]: 2025-12-16 13:11:23.204 [INFO][5467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.131/26] handle="k8s-pod-network.6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:23.232896 containerd[2582]: 2025-12-16 13:11:23.204 [INFO][5467] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
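
The ipam.go entries above trace one assignment end to end: confirm this host's affinity for the 192.168.64.128/26 block, load the block, claim the next free address (here 192.168.64.131), and write the block back before releasing the host-wide lock. The sketch below only illustrates the "next free address in a block" step; nextFree is a hypothetical helper, not Calico's allocator, and the set of already-used addresses is an assumption (only .130 is confirmed by the earlier apiserver assignment in this log).

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks a block's addresses in order and returns the first one
    // that is not already claimed.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.64.128/26") // block from the log
        // Assumed already claimed on this host: .130 matches the apiserver pod
        // assigned earlier in the log; .128 and .129 are assumptions.
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.64.128"): true,
            netip.MustParseAddr("192.168.64.129"): true,
            netip.MustParseAddr("192.168.64.130"): true,
        }
        if a, ok := nextFree(block, used); ok {
            fmt.Println("next free address:", a) // 192.168.64.131, as in the log
        }
    }
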
Dec 16 13:11:23.232896 containerd[2582]: 2025-12-16 13:11:23.204 [INFO][5467] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.131/26] IPv6=[] ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" HandleID="k8s-pod-network.6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Workload="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" Dec 16 13:11:23.233000 containerd[2582]: 2025-12-16 13:11:23.207 [INFO][5450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Namespace="calico-system" Pod="calico-kube-controllers-5469dcf444-55tl9" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0", GenerateName:"calico-kube-controllers-5469dcf444-", Namespace:"calico-system", SelfLink:"", UID:"2527a715-8ea0-4d0b-a053-0e471ff72634", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5469dcf444", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"", Pod:"calico-kube-controllers-5469dcf444-55tl9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2ac01e202e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:23.233051 containerd[2582]: 2025-12-16 13:11:23.207 [INFO][5450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.131/32] ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Namespace="calico-system" Pod="calico-kube-controllers-5469dcf444-55tl9" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" Dec 16 13:11:23.233051 containerd[2582]: 2025-12-16 13:11:23.207 [INFO][5450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2ac01e202e ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Namespace="calico-system" Pod="calico-kube-controllers-5469dcf444-55tl9" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" Dec 16 13:11:23.233051 containerd[2582]: 2025-12-16 13:11:23.214 [INFO][5450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Namespace="calico-system" Pod="calico-kube-controllers-5469dcf444-55tl9" 
WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" Dec 16 13:11:23.233093 containerd[2582]: 2025-12-16 13:11:23.215 [INFO][5450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Namespace="calico-system" Pod="calico-kube-controllers-5469dcf444-55tl9" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0", GenerateName:"calico-kube-controllers-5469dcf444-", Namespace:"calico-system", SelfLink:"", UID:"2527a715-8ea0-4d0b-a053-0e471ff72634", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5469dcf444", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee", Pod:"calico-kube-controllers-5469dcf444-55tl9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2ac01e202e", MAC:"ea:75:4e:7c:e8:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:23.233300 containerd[2582]: 2025-12-16 13:11:23.231 [INFO][5450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" Namespace="calico-system" Pod="calico-kube-controllers-5469dcf444-55tl9" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--kube--controllers--5469dcf444--55tl9-eth0" Dec 16 13:11:23.245000 audit[5534]: NETFILTER_CFG table=filter:129 family=2 entries=40 op=nft_register_chain pid=5534 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:23.245000 audit[5534]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffd1a25f9e0 a2=0 a3=7ffd1a25f9cc items=0 ppid=5258 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.245000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:23.267256 containerd[2582]: time="2025-12-16T13:11:23.267237026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6997d77-8hwmz,Uid:02384784-68b8-42d3-aba8-a97ba2d37c12,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"213b14692a327a484e4d1d6c89d3ebcc7459f15311253779c1e095131c4420a7\"" Dec 16 13:11:23.270311 containerd[2582]: time="2025-12-16T13:11:23.270014351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:11:23.277377 containerd[2582]: time="2025-12-16T13:11:23.277348227Z" level=info msg="connecting to shim 6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee" address="unix:///run/containerd/s/c9e6d2ed567e2826f4fbb32c74b245c8228acb74ab3acf8f305363a0c31156b0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:23.299863 systemd[1]: Started cri-containerd-6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee.scope - libcontainer container 6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee. Dec 16 13:11:23.306000 audit: BPF prog-id=240 op=LOAD Dec 16 13:11:23.306000 audit: BPF prog-id=241 op=LOAD Dec 16 13:11:23.306000 audit[5561]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5549 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638313563323032363765663762303963373037343264373563326537 Dec 16 13:11:23.306000 audit: BPF prog-id=241 op=UNLOAD Dec 16 13:11:23.306000 audit[5561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5549 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638313563323032363765663762303963373037343264373563326537 Dec 16 13:11:23.306000 audit: BPF prog-id=242 op=LOAD Dec 16 13:11:23.306000 audit[5561]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5549 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638313563323032363765663762303963373037343264373563326537 Dec 16 13:11:23.306000 audit: BPF prog-id=243 op=LOAD Dec 16 13:11:23.306000 audit[5561]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5549 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638313563323032363765663762303963373037343264373563326537 Dec 16 
13:11:23.306000 audit: BPF prog-id=243 op=UNLOAD Dec 16 13:11:23.306000 audit[5561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5549 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638313563323032363765663762303963373037343264373563326537 Dec 16 13:11:23.306000 audit: BPF prog-id=242 op=UNLOAD Dec 16 13:11:23.306000 audit[5561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5549 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638313563323032363765663762303963373037343264373563326537 Dec 16 13:11:23.306000 audit: BPF prog-id=244 op=LOAD Dec 16 13:11:23.306000 audit[5561]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5549 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:23.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638313563323032363765663762303963373037343264373563326537 Dec 16 13:11:23.343255 containerd[2582]: time="2025-12-16T13:11:23.343228407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5469dcf444-55tl9,Uid:2527a715-8ea0-4d0b-a053-0e471ff72634,Namespace:calico-system,Attempt:0,} returns sandbox id \"6815c20267ef7b09c70742d75c2e7c1c4aa35a1bc9c9f8c0020f177c118bb0ee\"" Dec 16 13:11:23.364775 systemd-networkd[2197]: vxlan.calico: Gained IPv6LL Dec 16 13:11:23.518348 containerd[2582]: time="2025-12-16T13:11:23.518290543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:23.521179 containerd[2582]: time="2025-12-16T13:11:23.521152229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:11:23.521269 containerd[2582]: time="2025-12-16T13:11:23.521200836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:23.521335 kubelet[3986]: E1216 13:11:23.521300 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:11:23.521373 kubelet[3986]: E1216 
13:11:23.521345 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:11:23.522260 containerd[2582]: time="2025-12-16T13:11:23.521564210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:11:23.522309 kubelet[3986]: E1216 13:11:23.521554 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ftln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f6997d77-8hwmz_calico-apiserver(02384784-68b8-42d3-aba8-a97ba2d37c12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:23.523617 kubelet[3986]: E1216 13:11:23.523597 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:11:23.758568 containerd[2582]: time="2025-12-16T13:11:23.758529104Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Dec 16 13:11:23.761429 containerd[2582]: time="2025-12-16T13:11:23.761393711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:11:23.761477 containerd[2582]: time="2025-12-16T13:11:23.761443888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:23.761552 kubelet[3986]: E1216 13:11:23.761528 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:11:23.761595 kubelet[3986]: E1216 13:11:23.761561 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:11:23.761714 kubelet[3986]: E1216 13:11:23.761668 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59k48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5469dcf444-55tl9_calico-system(2527a715-8ea0-4d0b-a053-0e471ff72634): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:23.762998 kubelet[3986]: E1216 13:11:23.762962 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:11:23.975054 containerd[2582]: time="2025-12-16T13:11:23.974905844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-skmx5,Uid:9b89c7b3-c47a-4d49-a11e-8c488d12250e,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:23.975054 containerd[2582]: time="2025-12-16T13:11:23.974935727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8vs8,Uid:2c4149fe-258b-4d56-9699-3c8027ef2524,Namespace:kube-system,Attempt:0,}" Dec 16 13:11:23.975054 containerd[2582]: time="2025-12-16T13:11:23.974908696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tj5zh,Uid:40ac25d7-4601-4254-b29f-0ca4ec170f77,Namespace:calico-system,Attempt:0,}" Dec 16 13:11:24.097843 kubelet[3986]: E1216 13:11:24.097812 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:11:24.103338 kubelet[3986]: E1216 13:11:24.103315 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:11:24.147000 audit[5647]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:24.147000 audit[5647]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd9de12fd0 a2=0 a3=7ffd9de12fbc items=0 ppid=4121 pid=5647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:24.151000 audit[5647]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:24.151000 audit[5647]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd9de12fd0 a2=0 a3=0 items=0 ppid=4121 pid=5647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.151000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:24.159855 systemd-networkd[2197]: cali0eaab594816: Link UP Dec 16 13:11:24.161313 systemd-networkd[2197]: cali0eaab594816: Gained carrier Dec 16 13:11:24.175757 containerd[2582]: 2025-12-16 13:11:24.049 [INFO][5610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0 csi-node-driver- calico-system 40ac25d7-4601-4254-b29f-0ca4ec170f77 686 0 2025-12-16 13:10:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-a-e647365c22 csi-node-driver-tj5zh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0eaab594816 [] [] }} ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Namespace="calico-system" Pod="csi-node-driver-tj5zh" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-" Dec 16 13:11:24.175757 containerd[2582]: 2025-12-16 13:11:24.049 [INFO][5610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Namespace="calico-system" Pod="csi-node-driver-tj5zh" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" Dec 16 13:11:24.175757 containerd[2582]: 2025-12-16 13:11:24.087 [INFO][5626] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" HandleID="k8s-pod-network.7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Workload="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" Dec 16 13:11:24.175907 containerd[2582]: 2025-12-16 13:11:24.087 [INFO][5626] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" HandleID="k8s-pod-network.7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Workload="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-e647365c22", "pod":"csi-node-driver-tj5zh", "timestamp":"2025-12-16 13:11:24.08738814 +0000 UTC"}, Hostname:"ci-4547.0.0-a-e647365c22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:11:24.175907 containerd[2582]: 2025-12-16 13:11:24.087 [INFO][5626] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:11:24.175907 containerd[2582]: 2025-12-16 13:11:24.087 [INFO][5626] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:11:24.175907 containerd[2582]: 2025-12-16 13:11:24.087 [INFO][5626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-e647365c22' Dec 16 13:11:24.175907 containerd[2582]: 2025-12-16 13:11:24.094 [INFO][5626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.175907 containerd[2582]: 2025-12-16 13:11:24.099 [INFO][5626] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.175907 containerd[2582]: 2025-12-16 13:11:24.104 [INFO][5626] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.175907 containerd[2582]: 2025-12-16 13:11:24.107 [INFO][5626] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.175907 containerd[2582]: 2025-12-16 13:11:24.117 [INFO][5626] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.176098 containerd[2582]: 2025-12-16 13:11:24.117 [INFO][5626] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.176098 containerd[2582]: 2025-12-16 13:11:24.125 [INFO][5626] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd Dec 16 13:11:24.176098 containerd[2582]: 2025-12-16 13:11:24.137 [INFO][5626] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.176098 containerd[2582]: 2025-12-16 13:11:24.146 [INFO][5626] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.132/26] block=192.168.64.128/26 handle="k8s-pod-network.7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.176098 containerd[2582]: 2025-12-16 13:11:24.146 [INFO][5626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.132/26] handle="k8s-pod-network.7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.176098 containerd[2582]: 2025-12-16 13:11:24.146 [INFO][5626] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:11:24.176098 containerd[2582]: 2025-12-16 13:11:24.146 [INFO][5626] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.132/26] IPv6=[] ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" HandleID="k8s-pod-network.7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Workload="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" Dec 16 13:11:24.176235 containerd[2582]: 2025-12-16 13:11:24.150 [INFO][5610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Namespace="calico-system" Pod="csi-node-driver-tj5zh" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40ac25d7-4601-4254-b29f-0ca4ec170f77", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"", Pod:"csi-node-driver-tj5zh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0eaab594816", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:24.176305 containerd[2582]: 2025-12-16 13:11:24.150 [INFO][5610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.132/32] ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Namespace="calico-system" Pod="csi-node-driver-tj5zh" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" Dec 16 13:11:24.176305 containerd[2582]: 2025-12-16 13:11:24.150 [INFO][5610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0eaab594816 ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Namespace="calico-system" Pod="csi-node-driver-tj5zh" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" Dec 16 13:11:24.176305 containerd[2582]: 2025-12-16 13:11:24.160 [INFO][5610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Namespace="calico-system" Pod="csi-node-driver-tj5zh" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" Dec 16 13:11:24.177409 containerd[2582]: 2025-12-16 13:11:24.162 [INFO][5610] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Namespace="calico-system" Pod="csi-node-driver-tj5zh" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40ac25d7-4601-4254-b29f-0ca4ec170f77", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd", Pod:"csi-node-driver-tj5zh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0eaab594816", MAC:"f6:58:37:c5:40:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:24.177488 containerd[2582]: 2025-12-16 13:11:24.173 [INFO][5610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" Namespace="calico-system" Pod="csi-node-driver-tj5zh" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-csi--node--driver--tj5zh-eth0" Dec 16 13:11:24.185000 audit[5657]: NETFILTER_CFG table=filter:132 family=2 entries=44 op=nft_register_chain pid=5657 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:24.185000 audit[5657]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffcbdf2ca20 a2=0 a3=7ffcbdf2ca0c items=0 ppid=5258 pid=5657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.185000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:24.219016 containerd[2582]: time="2025-12-16T13:11:24.218961086Z" level=info msg="connecting to shim 7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd" address="unix:///run/containerd/s/0de7cfdf3ff92b347efea2bbc45dbaf4ee0c35f063f8537b141204f27c24cf28" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:24.227422 systemd-networkd[2197]: calif2b980cffd3: Link UP Dec 16 13:11:24.228677 systemd-networkd[2197]: calif2b980cffd3: Gained carrier Dec 16 13:11:24.250095 containerd[2582]: 2025-12-16 13:11:24.050 [INFO][5589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0 coredns-668d6bf9bc- kube-system 9b89c7b3-c47a-4d49-a11e-8c488d12250e 800 0 2025-12-16 13:10:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-e647365c22 coredns-668d6bf9bc-skmx5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2b980cffd3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Namespace="kube-system" Pod="coredns-668d6bf9bc-skmx5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-" Dec 16 13:11:24.250095 containerd[2582]: 2025-12-16 13:11:24.051 [INFO][5589] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Namespace="kube-system" Pod="coredns-668d6bf9bc-skmx5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" Dec 16 13:11:24.250095 containerd[2582]: 2025-12-16 13:11:24.091 [INFO][5628] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" HandleID="k8s-pod-network.cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Workload="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" Dec 16 13:11:24.250247 containerd[2582]: 2025-12-16 13:11:24.091 [INFO][5628] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" HandleID="k8s-pod-network.cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Workload="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-e647365c22", "pod":"coredns-668d6bf9bc-skmx5", "timestamp":"2025-12-16 13:11:24.09121098 +0000 UTC"}, Hostname:"ci-4547.0.0-a-e647365c22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:11:24.250247 containerd[2582]: 2025-12-16 13:11:24.091 [INFO][5628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:11:24.250247 containerd[2582]: 2025-12-16 13:11:24.147 [INFO][5628] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
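
The bracketed CNI lines in these entries share one layout: a timestamp, [LEVEL][id], the source file and line, then the message. A small Go sketch for splitting a line into those pieces; the regular expression is an assumption inferred from the lines in this log, not a format documented by Calico, and the sample line is copied from the entry above:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Layout inferred from the CNI plugin lines above:
        //   <timestamp> [LEVEL][id] <file> <line>: <message>
        re := regexp.MustCompile(`^(\S+ \S+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

        line := "2025-12-16 13:11:24.091 [INFO][5628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock."
        m := re.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("line did not match the expected layout")
            return
        }
        fmt.Println("time:   ", m[1])
        fmt.Println("level:  ", m[2])
        fmt.Println("id:     ", m[3])
        fmt.Println("source: ", m[4]+":"+m[5])
        fmt.Println("message:", m[6])
    }
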
Dec 16 13:11:24.250247 containerd[2582]: 2025-12-16 13:11:24.147 [INFO][5628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-e647365c22' Dec 16 13:11:24.250247 containerd[2582]: 2025-12-16 13:11:24.193 [INFO][5628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.250247 containerd[2582]: 2025-12-16 13:11:24.198 [INFO][5628] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.250247 containerd[2582]: 2025-12-16 13:11:24.202 [INFO][5628] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.250247 containerd[2582]: 2025-12-16 13:11:24.203 [INFO][5628] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.250247 containerd[2582]: 2025-12-16 13:11:24.205 [INFO][5628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.250864 containerd[2582]: 2025-12-16 13:11:24.205 [INFO][5628] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.250864 containerd[2582]: 2025-12-16 13:11:24.206 [INFO][5628] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff Dec 16 13:11:24.250864 containerd[2582]: 2025-12-16 13:11:24.212 [INFO][5628] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.250864 containerd[2582]: 2025-12-16 13:11:24.220 [INFO][5628] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.133/26] block=192.168.64.128/26 handle="k8s-pod-network.cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.250864 containerd[2582]: 2025-12-16 13:11:24.220 [INFO][5628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.133/26] handle="k8s-pod-network.cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.250864 containerd[2582]: 2025-12-16 13:11:24.220 [INFO][5628] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
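
The 404s recorded earlier for ghcr.io/flatcar/calico/apiserver:v3.30.4 and ghcr.io/flatcar/calico/kube-controllers:v3.30.4 mean the registry has no manifest for those tags, so kubelet keeps alternating between ErrImagePull and ImagePullBackOff for both pods. One way to check a tag by hand is sketched below; tagExists is a hypothetical helper, the /v2 manifest URL follows the standard OCI distribution layout, and the anonymous token endpoint is an assumption about how ghcr.io serves public images:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // tagExists asks the registry for the manifest of image:tag and reports
    // whether the request came back 200.
    func tagExists(registry, image, tag string) (bool, error) {
        // 1. Fetch an anonymous pull token (assumed Docker-style token service).
        tokURL := fmt.Sprintf("https://%s/token?scope=repository:%s:pull", registry, image)
        resp, err := http.Get(tokURL)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            return false, err
        }

        // 2. HEAD the manifest for the tag.
        manURL := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, image, tag)
        req, err := http.NewRequest(http.MethodHead, manURL, nil)
        if err != nil {
            return false, err
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            return false, err
        }
        defer res.Body.Close()
        return res.StatusCode == http.StatusOK, nil
    }

    func main() {
        ok, err := tagExists("ghcr.io", "flatcar/calico/apiserver", "v3.30.4")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("tag present:", ok) // expected false while the log above shows 404s
    }
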
Dec 16 13:11:24.250864 containerd[2582]: 2025-12-16 13:11:24.220 [INFO][5628] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.133/26] IPv6=[] ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" HandleID="k8s-pod-network.cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Workload="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" Dec 16 13:11:24.250998 containerd[2582]: 2025-12-16 13:11:24.223 [INFO][5589] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Namespace="kube-system" Pod="coredns-668d6bf9bc-skmx5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9b89c7b3-c47a-4d49-a11e-8c488d12250e", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"", Pod:"coredns-668d6bf9bc-skmx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2b980cffd3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:24.250998 containerd[2582]: 2025-12-16 13:11:24.223 [INFO][5589] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.133/32] ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Namespace="kube-system" Pod="coredns-668d6bf9bc-skmx5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" Dec 16 13:11:24.250998 containerd[2582]: 2025-12-16 13:11:24.223 [INFO][5589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2b980cffd3 ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Namespace="kube-system" Pod="coredns-668d6bf9bc-skmx5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" Dec 16 13:11:24.250998 containerd[2582]: 2025-12-16 13:11:24.229 [INFO][5589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-skmx5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" Dec 16 13:11:24.250998 containerd[2582]: 2025-12-16 13:11:24.230 [INFO][5589] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Namespace="kube-system" Pod="coredns-668d6bf9bc-skmx5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9b89c7b3-c47a-4d49-a11e-8c488d12250e", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff", Pod:"coredns-668d6bf9bc-skmx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2b980cffd3", MAC:"16:3e:d0:e4:d8:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:24.250998 containerd[2582]: 2025-12-16 13:11:24.247 [INFO][5589] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" Namespace="kube-system" Pod="coredns-668d6bf9bc-skmx5" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--skmx5-eth0" Dec 16 13:11:24.251886 systemd[1]: Started cri-containerd-7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd.scope - libcontainer container 7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd. 
Dec 16 13:11:24.261000 audit: BPF prog-id=245 op=LOAD Dec 16 13:11:24.261000 audit: BPF prog-id=246 op=LOAD Dec 16 13:11:24.261000 audit[5679]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5666 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764326633333235313566633062393230386464353732643665393732 Dec 16 13:11:24.261000 audit: BPF prog-id=246 op=UNLOAD Dec 16 13:11:24.261000 audit[5679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5666 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764326633333235313566633062393230386464353732643665393732 Dec 16 13:11:24.261000 audit: BPF prog-id=247 op=LOAD Dec 16 13:11:24.261000 audit[5679]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5666 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764326633333235313566633062393230386464353732643665393732 Dec 16 13:11:24.261000 audit: BPF prog-id=248 op=LOAD Dec 16 13:11:24.261000 audit[5679]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5666 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764326633333235313566633062393230386464353732643665393732 Dec 16 13:11:24.261000 audit: BPF prog-id=248 op=UNLOAD Dec 16 13:11:24.261000 audit[5679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5666 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764326633333235313566633062393230386464353732643665393732 Dec 16 13:11:24.261000 audit: BPF prog-id=247 op=UNLOAD Dec 16 13:11:24.261000 audit[5679]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5666 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764326633333235313566633062393230386464353732643665393732 Dec 16 13:11:24.261000 audit: BPF prog-id=249 op=LOAD Dec 16 13:11:24.261000 audit[5679]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5666 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764326633333235313566633062393230386464353732643665393732 Dec 16 13:11:24.275000 audit[5711]: NETFILTER_CFG table=filter:133 family=2 entries=54 op=nft_register_chain pid=5711 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:24.275000 audit[5711]: SYSCALL arch=c000003e syscall=46 success=yes exit=26116 a0=3 a1=7ffc9f9ded90 a2=0 a3=7ffc9f9ded7c items=0 ppid=5258 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.275000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:24.278064 containerd[2582]: time="2025-12-16T13:11:24.278027264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tj5zh,Uid:40ac25d7-4601-4254-b29f-0ca4ec170f77,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d2f332515fc0b9208dd572d6e9722f088d6769de988c64c10b3f051394a05dd\"" Dec 16 13:11:24.278948 containerd[2582]: time="2025-12-16T13:11:24.278848936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:11:24.304169 containerd[2582]: time="2025-12-16T13:11:24.304135990Z" level=info msg="connecting to shim cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff" address="unix:///run/containerd/s/d2120929de531272bd3dc23d712258a7e7631b6054565a7bcaa72d311b60ef7d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:24.324771 systemd-networkd[2197]: calib2ac01e202e: Gained IPv6LL Dec 16 13:11:24.326897 systemd[1]: Started cri-containerd-cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff.scope - libcontainer container cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff. 
Dec 16 13:11:24.332877 systemd-networkd[2197]: calibad8061d447: Link UP Dec 16 13:11:24.334978 systemd-networkd[2197]: calibad8061d447: Gained carrier Dec 16 13:11:24.349000 audit: BPF prog-id=250 op=LOAD Dec 16 13:11:24.350000 audit: BPF prog-id=251 op=LOAD Dec 16 13:11:24.350000 audit[5732]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5721 pid=5732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613631626236306633333435333161646136363333343366663463 Dec 16 13:11:24.350000 audit: BPF prog-id=251 op=UNLOAD Dec 16 13:11:24.350000 audit[5732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5721 pid=5732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613631626236306633333435333161646136363333343366663463 Dec 16 13:11:24.351000 audit: BPF prog-id=252 op=LOAD Dec 16 13:11:24.351000 audit[5732]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5721 pid=5732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613631626236306633333435333161646136363333343366663463 Dec 16 13:11:24.352000 audit: BPF prog-id=253 op=LOAD Dec 16 13:11:24.352000 audit[5732]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5721 pid=5732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613631626236306633333435333161646136363333343366663463 Dec 16 13:11:24.352000 audit: BPF prog-id=253 op=UNLOAD Dec 16 13:11:24.352000 audit[5732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5721 pid=5732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.352000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613631626236306633333435333161646136363333343366663463 Dec 16 13:11:24.352000 audit: BPF prog-id=252 op=UNLOAD Dec 16 13:11:24.352000 audit[5732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5721 pid=5732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613631626236306633333435333161646136363333343366663463 Dec 16 13:11:24.352000 audit: BPF prog-id=254 op=LOAD Dec 16 13:11:24.352000 audit[5732]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5721 pid=5732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613631626236306633333435333161646136363333343366663463 Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.056 [INFO][5593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0 coredns-668d6bf9bc- kube-system 2c4149fe-258b-4d56-9699-3c8027ef2524 802 0 2025-12-16 13:10:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-e647365c22 coredns-668d6bf9bc-q8vs8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibad8061d447 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8vs8" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.056 [INFO][5593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8vs8" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.105 [INFO][5635] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" HandleID="k8s-pod-network.0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Workload="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.105 [INFO][5635] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" 
HandleID="k8s-pod-network.0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Workload="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-e647365c22", "pod":"coredns-668d6bf9bc-q8vs8", "timestamp":"2025-12-16 13:11:24.105855352 +0000 UTC"}, Hostname:"ci-4547.0.0-a-e647365c22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.105 [INFO][5635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.220 [INFO][5635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.220 [INFO][5635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-e647365c22' Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.294 [INFO][5635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.300 [INFO][5635] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.305 [INFO][5635] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.307 [INFO][5635] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.309 [INFO][5635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.309 [INFO][5635] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.310 [INFO][5635] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5 Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.315 [INFO][5635] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.325 [INFO][5635] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.134/26] block=192.168.64.128/26 handle="k8s-pod-network.0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.326 [INFO][5635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.134/26] handle="k8s-pod-network.0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.326 [INFO][5635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:11:24.359231 containerd[2582]: 2025-12-16 13:11:24.326 [INFO][5635] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.134/26] IPv6=[] ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" HandleID="k8s-pod-network.0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Workload="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" Dec 16 13:11:24.359694 containerd[2582]: 2025-12-16 13:11:24.327 [INFO][5593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8vs8" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c4149fe-258b-4d56-9699-3c8027ef2524", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"", Pod:"coredns-668d6bf9bc-q8vs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibad8061d447", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:24.359694 containerd[2582]: 2025-12-16 13:11:24.327 [INFO][5593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.134/32] ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8vs8" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" Dec 16 13:11:24.359694 containerd[2582]: 2025-12-16 13:11:24.327 [INFO][5593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibad8061d447 ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8vs8" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" Dec 16 13:11:24.359694 containerd[2582]: 2025-12-16 13:11:24.336 [INFO][5593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-q8vs8" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" Dec 16 13:11:24.359694 containerd[2582]: 2025-12-16 13:11:24.337 [INFO][5593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8vs8" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c4149fe-258b-4d56-9699-3c8027ef2524", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5", Pod:"coredns-668d6bf9bc-q8vs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibad8061d447", MAC:"fe:f5:a9:c1:9d:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:24.359694 containerd[2582]: 2025-12-16 13:11:24.356 [INFO][5593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8vs8" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-coredns--668d6bf9bc--q8vs8-eth0" Dec 16 13:11:24.381000 audit[5766]: NETFILTER_CFG table=filter:134 family=2 entries=54 op=nft_register_chain pid=5766 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:24.381000 audit[5766]: SYSCALL arch=c000003e syscall=46 success=yes exit=25572 a0=3 a1=7fff29505150 a2=0 a3=7fff2950513c items=0 ppid=5258 pid=5766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.381000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:24.387274 containerd[2582]: time="2025-12-16T13:11:24.387095878Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-skmx5,Uid:9b89c7b3-c47a-4d49-a11e-8c488d12250e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff\"" Dec 16 13:11:24.389425 containerd[2582]: time="2025-12-16T13:11:24.389402297Z" level=info msg="CreateContainer within sandbox \"cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:11:24.398879 containerd[2582]: time="2025-12-16T13:11:24.398717140Z" level=info msg="connecting to shim 0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5" address="unix:///run/containerd/s/bb585accfeca0dfb0385b3eeb7ff47c0b99af3d4f39cbf33da19105cd9e1db87" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:24.413501 containerd[2582]: time="2025-12-16T13:11:24.413484442Z" level=info msg="Container f41f5afa9208aff31cfda78c9067742b43fcb75cd421bf9d2c28eceeafec3b40: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:24.414863 systemd[1]: Started cri-containerd-0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5.scope - libcontainer container 0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5. Dec 16 13:11:24.421000 audit: BPF prog-id=255 op=LOAD Dec 16 13:11:24.421000 audit: BPF prog-id=256 op=LOAD Dec 16 13:11:24.421000 audit[5793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5781 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303166323630626563386539653738303036323863313234663838 Dec 16 13:11:24.421000 audit: BPF prog-id=256 op=UNLOAD Dec 16 13:11:24.421000 audit[5793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5781 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303166323630626563386539653738303036323863313234663838 Dec 16 13:11:24.422000 audit: BPF prog-id=257 op=LOAD Dec 16 13:11:24.422000 audit[5793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5781 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303166323630626563386539653738303036323863313234663838 Dec 16 13:11:24.422000 audit: BPF prog-id=258 op=LOAD Dec 16 13:11:24.422000 audit[5793]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5781 pid=5793 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303166323630626563386539653738303036323863313234663838 Dec 16 13:11:24.422000 audit: BPF prog-id=258 op=UNLOAD Dec 16 13:11:24.422000 audit[5793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5781 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303166323630626563386539653738303036323863313234663838 Dec 16 13:11:24.422000 audit: BPF prog-id=257 op=UNLOAD Dec 16 13:11:24.422000 audit[5793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5781 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303166323630626563386539653738303036323863313234663838 Dec 16 13:11:24.422000 audit: BPF prog-id=259 op=LOAD Dec 16 13:11:24.422000 audit[5793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5781 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303166323630626563386539653738303036323863313234663838 Dec 16 13:11:24.430317 containerd[2582]: time="2025-12-16T13:11:24.430281436Z" level=info msg="CreateContainer within sandbox \"cba61bb60f334531ada663343ff4cca72f45c88448f7e04c0c0669cd85af8bff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f41f5afa9208aff31cfda78c9067742b43fcb75cd421bf9d2c28eceeafec3b40\"" Dec 16 13:11:24.430905 containerd[2582]: time="2025-12-16T13:11:24.430849332Z" level=info msg="StartContainer for \"f41f5afa9208aff31cfda78c9067742b43fcb75cd421bf9d2c28eceeafec3b40\"" Dec 16 13:11:24.432678 containerd[2582]: time="2025-12-16T13:11:24.432441790Z" level=info msg="connecting to shim f41f5afa9208aff31cfda78c9067742b43fcb75cd421bf9d2c28eceeafec3b40" address="unix:///run/containerd/s/d2120929de531272bd3dc23d712258a7e7631b6054565a7bcaa72d311b60ef7d" protocol=ttrpc version=3 Dec 16 13:11:24.451881 systemd[1]: Started cri-containerd-f41f5afa9208aff31cfda78c9067742b43fcb75cd421bf9d2c28eceeafec3b40.scope - libcontainer container 
f41f5afa9208aff31cfda78c9067742b43fcb75cd421bf9d2c28eceeafec3b40. Dec 16 13:11:24.463000 audit: BPF prog-id=260 op=LOAD Dec 16 13:11:24.464000 audit: BPF prog-id=261 op=LOAD Dec 16 13:11:24.464000 audit[5811]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5721 pid=5811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634316635616661393230386166663331636664613738633930363737 Dec 16 13:11:24.464000 audit: BPF prog-id=261 op=UNLOAD Dec 16 13:11:24.464000 audit[5811]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5721 pid=5811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634316635616661393230386166663331636664613738633930363737 Dec 16 13:11:24.464000 audit: BPF prog-id=262 op=LOAD Dec 16 13:11:24.464000 audit[5811]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5721 pid=5811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634316635616661393230386166663331636664613738633930363737 Dec 16 13:11:24.464000 audit: BPF prog-id=263 op=LOAD Dec 16 13:11:24.464000 audit[5811]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5721 pid=5811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634316635616661393230386166663331636664613738633930363737 Dec 16 13:11:24.464000 audit: BPF prog-id=263 op=UNLOAD Dec 16 13:11:24.464000 audit[5811]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5721 pid=5811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634316635616661393230386166663331636664613738633930363737 Dec 16 13:11:24.464000 audit: BPF 
prog-id=262 op=UNLOAD Dec 16 13:11:24.464000 audit[5811]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5721 pid=5811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634316635616661393230386166663331636664613738633930363737 Dec 16 13:11:24.464000 audit: BPF prog-id=264 op=LOAD Dec 16 13:11:24.466060 containerd[2582]: time="2025-12-16T13:11:24.465490917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8vs8,Uid:2c4149fe-258b-4d56-9699-3c8027ef2524,Namespace:kube-system,Attempt:0,} returns sandbox id \"0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5\"" Dec 16 13:11:24.464000 audit[5811]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5721 pid=5811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634316635616661393230386166663331636664613738633930363737 Dec 16 13:11:24.468000 containerd[2582]: time="2025-12-16T13:11:24.467980927Z" level=info msg="CreateContainer within sandbox \"0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:11:24.486128 containerd[2582]: time="2025-12-16T13:11:24.485241880Z" level=info msg="StartContainer for \"f41f5afa9208aff31cfda78c9067742b43fcb75cd421bf9d2c28eceeafec3b40\" returns successfully" Dec 16 13:11:24.488771 containerd[2582]: time="2025-12-16T13:11:24.488748519Z" level=info msg="Container 4d1f282ab7b623ea80eb51e8cc604d089d086cc3a1ff74243604d70600c8a414: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:11:24.517199 containerd[2582]: time="2025-12-16T13:11:24.517174700Z" level=info msg="CreateContainer within sandbox \"0601f260bec8e9e7800628c124f8870cf4523726b08b73d8be81dcacdc580ed5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d1f282ab7b623ea80eb51e8cc604d089d086cc3a1ff74243604d70600c8a414\"" Dec 16 13:11:24.517545 containerd[2582]: time="2025-12-16T13:11:24.517481132Z" level=info msg="StartContainer for \"4d1f282ab7b623ea80eb51e8cc604d089d086cc3a1ff74243604d70600c8a414\"" Dec 16 13:11:24.518429 containerd[2582]: time="2025-12-16T13:11:24.518395672Z" level=info msg="connecting to shim 4d1f282ab7b623ea80eb51e8cc604d089d086cc3a1ff74243604d70600c8a414" address="unix:///run/containerd/s/bb585accfeca0dfb0385b3eeb7ff47c0b99af3d4f39cbf33da19105cd9e1db87" protocol=ttrpc version=3 Dec 16 13:11:24.526625 containerd[2582]: time="2025-12-16T13:11:24.526596266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:24.529611 containerd[2582]: time="2025-12-16T13:11:24.529583557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:11:24.529611 containerd[2582]: time="2025-12-16T13:11:24.529633293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:24.529740 kubelet[3986]: E1216 13:11:24.529691 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:11:24.529779 kubelet[3986]: E1216 13:11:24.529752 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:11:24.529887 kubelet[3986]: E1216 13:11:24.529853 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl6fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:24.531498 containerd[2582]: time="2025-12-16T13:11:24.531439611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:11:24.531863 systemd[1]: Started cri-containerd-4d1f282ab7b623ea80eb51e8cc604d089d086cc3a1ff74243604d70600c8a414.scope - libcontainer container 
4d1f282ab7b623ea80eb51e8cc604d089d086cc3a1ff74243604d70600c8a414. Dec 16 13:11:24.541000 audit: BPF prog-id=265 op=LOAD Dec 16 13:11:24.542000 audit: BPF prog-id=266 op=LOAD Dec 16 13:11:24.542000 audit[5848]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5781 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316632383261623762363233656138306562353165386363363034 Dec 16 13:11:24.542000 audit: BPF prog-id=266 op=UNLOAD Dec 16 13:11:24.542000 audit[5848]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5781 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316632383261623762363233656138306562353165386363363034 Dec 16 13:11:24.542000 audit: BPF prog-id=267 op=LOAD Dec 16 13:11:24.542000 audit[5848]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5781 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316632383261623762363233656138306562353165386363363034 Dec 16 13:11:24.542000 audit: BPF prog-id=268 op=LOAD Dec 16 13:11:24.542000 audit[5848]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5781 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316632383261623762363233656138306562353165386363363034 Dec 16 13:11:24.542000 audit: BPF prog-id=268 op=UNLOAD Dec 16 13:11:24.542000 audit[5848]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5781 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316632383261623762363233656138306562353165386363363034 Dec 16 13:11:24.542000 audit: BPF 
prog-id=267 op=UNLOAD Dec 16 13:11:24.542000 audit[5848]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5781 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316632383261623762363233656138306562353165386363363034 Dec 16 13:11:24.542000 audit: BPF prog-id=269 op=LOAD Dec 16 13:11:24.542000 audit[5848]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5781 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:24.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316632383261623762363233656138306562353165386363363034 Dec 16 13:11:24.562598 containerd[2582]: time="2025-12-16T13:11:24.562525239Z" level=info msg="StartContainer for \"4d1f282ab7b623ea80eb51e8cc604d089d086cc3a1ff74243604d70600c8a414\" returns successfully" Dec 16 13:11:24.804938 containerd[2582]: time="2025-12-16T13:11:24.804838881Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:24.822354 containerd[2582]: time="2025-12-16T13:11:24.822318421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:11:24.822487 containerd[2582]: time="2025-12-16T13:11:24.822377183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:24.822515 kubelet[3986]: E1216 13:11:24.822478 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:11:24.822565 kubelet[3986]: E1216 13:11:24.822520 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:11:24.822730 kubelet[3986]: E1216 13:11:24.822638 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl6fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:24.824453 kubelet[3986]: E1216 13:11:24.824424 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:24.964880 systemd-networkd[2197]: cali8aafe92ec5e: Gained IPv6LL Dec 16 13:11:25.110595 kubelet[3986]: E1216 13:11:25.110301 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:11:25.110595 kubelet[3986]: 
E1216 13:11:25.110392 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:11:25.110935 kubelet[3986]: E1216 13:11:25.110680 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:25.134871 kubelet[3986]: I1216 13:11:25.134636 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-skmx5" podStartSLOduration=39.134622821 podStartE2EDuration="39.134622821s" podCreationTimestamp="2025-12-16 13:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:25.120570994 +0000 UTC m=+46.236112914" watchObservedRunningTime="2025-12-16 13:11:25.134622821 +0000 UTC m=+46.250164742" Dec 16 13:11:25.145164 kernel: kauditd_printk_skb: 406 callbacks suppressed Dec 16 13:11:25.145234 kernel: audit: type=1325 audit(1765890685.140:742): table=filter:135 family=2 entries=20 op=nft_register_rule pid=5882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:25.140000 audit[5882]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:25.150223 kernel: audit: type=1300 audit(1765890685.140:742): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffb0948a70 a2=0 a3=7fffb0948a5c items=0 ppid=4121 pid=5882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:25.140000 audit[5882]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffb0948a70 a2=0 a3=7fffb0948a5c items=0 ppid=4121 pid=5882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:25.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:25.158651 kernel: audit: type=1327 audit(1765890685.140:742): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:25.158727 kernel: audit: type=1325 audit(1765890685.152:743): table=nat:136 family=2 entries=14 op=nft_register_rule pid=5882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:25.152000 audit[5882]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=5882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:25.164039 kernel: audit: type=1300 audit(1765890685.152:743): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffb0948a70 a2=0 a3=0 items=0 ppid=4121 pid=5882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:25.152000 audit[5882]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffb0948a70 a2=0 a3=0 items=0 ppid=4121 pid=5882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:25.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:25.173724 kernel: audit: type=1327 audit(1765890685.152:743): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:25.180000 audit[5884]: NETFILTER_CFG table=filter:137 family=2 entries=17 op=nft_register_rule pid=5884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:25.183731 kernel: audit: type=1325 audit(1765890685.180:744): table=filter:137 family=2 entries=17 op=nft_register_rule pid=5884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:25.180000 audit[5884]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdd9516500 a2=0 a3=7ffdd95164ec items=0 ppid=4121 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:25.190947 kernel: audit: type=1300 audit(1765890685.180:744): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdd9516500 a2=0 a3=7ffdd95164ec items=0 ppid=4121 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:25.191003 kernel: audit: type=1327 audit(1765890685.180:744): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:25.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:25.193000 audit[5884]: NETFILTER_CFG table=nat:138 family=2 entries=35 op=nft_register_chain pid=5884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:25.198739 kernel: audit: type=1325 audit(1765890685.193:745): table=nat:138 family=2 entries=35 op=nft_register_chain pid=5884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:25.193000 audit[5884]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdd9516500 a2=0 a3=7ffdd95164ec items=0 
ppid=4121 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:25.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:25.216870 kubelet[3986]: I1216 13:11:25.216587 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q8vs8" podStartSLOduration=39.216573957 podStartE2EDuration="39.216573957s" podCreationTimestamp="2025-12-16 13:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:11:25.216232835 +0000 UTC m=+46.331774757" watchObservedRunningTime="2025-12-16 13:11:25.216573957 +0000 UTC m=+46.332115878" Dec 16 13:11:25.284917 systemd-networkd[2197]: cali0eaab594816: Gained IPv6LL Dec 16 13:11:25.974893 containerd[2582]: time="2025-12-16T13:11:25.974860129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7sznl,Uid:c9d340d2-955d-4739-b7cb-fc9a188575e3,Namespace:calico-system,Attempt:0,}" Dec 16 13:11:25.989138 systemd-networkd[2197]: calif2b980cffd3: Gained IPv6LL Dec 16 13:11:25.990043 systemd-networkd[2197]: calibad8061d447: Gained IPv6LL Dec 16 13:11:26.072245 systemd-networkd[2197]: calia1a866dc9c1: Link UP Dec 16 13:11:26.073371 systemd-networkd[2197]: calia1a866dc9c1: Gained carrier Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.017 [INFO][5885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0 goldmane-666569f655- calico-system c9d340d2-955d-4739-b7cb-fc9a188575e3 801 0 2025-12-16 13:10:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-a-e647365c22 goldmane-666569f655-7sznl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia1a866dc9c1 [] [] }} ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" Namespace="calico-system" Pod="goldmane-666569f655-7sznl" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.018 [INFO][5885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" Namespace="calico-system" Pod="goldmane-666569f655-7sznl" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.035 [INFO][5898] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" HandleID="k8s-pod-network.baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" Workload="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.035 [INFO][5898] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" HandleID="k8s-pod-network.baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" 
Workload="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-e647365c22", "pod":"goldmane-666569f655-7sznl", "timestamp":"2025-12-16 13:11:26.035454915 +0000 UTC"}, Hostname:"ci-4547.0.0-a-e647365c22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.035 [INFO][5898] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.035 [INFO][5898] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.035 [INFO][5898] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-e647365c22' Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.039 [INFO][5898] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.042 [INFO][5898] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.044 [INFO][5898] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.045 [INFO][5898] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.048 [INFO][5898] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.048 [INFO][5898] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.050 [INFO][5898] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68 Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.056 [INFO][5898] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.065 [INFO][5898] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.135/26] block=192.168.64.128/26 handle="k8s-pod-network.baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.066 [INFO][5898] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.135/26] handle="k8s-pod-network.baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.066 [INFO][5898] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:11:26.091805 containerd[2582]: 2025-12-16 13:11:26.066 [INFO][5898] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.135/26] IPv6=[] ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" HandleID="k8s-pod-network.baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" Workload="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" Dec 16 13:11:26.092533 containerd[2582]: 2025-12-16 13:11:26.068 [INFO][5885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" Namespace="calico-system" Pod="goldmane-666569f655-7sznl" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c9d340d2-955d-4739-b7cb-fc9a188575e3", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"", Pod:"goldmane-666569f655-7sznl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia1a866dc9c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:26.092533 containerd[2582]: 2025-12-16 13:11:26.068 [INFO][5885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.135/32] ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" Namespace="calico-system" Pod="goldmane-666569f655-7sznl" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" Dec 16 13:11:26.092533 containerd[2582]: 2025-12-16 13:11:26.068 [INFO][5885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1a866dc9c1 ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" Namespace="calico-system" Pod="goldmane-666569f655-7sznl" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" Dec 16 13:11:26.092533 containerd[2582]: 2025-12-16 13:11:26.075 [INFO][5885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" Namespace="calico-system" Pod="goldmane-666569f655-7sznl" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" Dec 16 13:11:26.092533 containerd[2582]: 2025-12-16 13:11:26.076 [INFO][5885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" 
Namespace="calico-system" Pod="goldmane-666569f655-7sznl" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c9d340d2-955d-4739-b7cb-fc9a188575e3", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68", Pod:"goldmane-666569f655-7sznl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia1a866dc9c1", MAC:"6a:03:77:60:5e:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:26.092533 containerd[2582]: 2025-12-16 13:11:26.089 [INFO][5885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" Namespace="calico-system" Pod="goldmane-666569f655-7sznl" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-goldmane--666569f655--7sznl-eth0" Dec 16 13:11:26.107000 audit[5913]: NETFILTER_CFG table=filter:139 family=2 entries=60 op=nft_register_chain pid=5913 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:26.107000 audit[5913]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7ffc1887bb30 a2=0 a3=7ffc1887bb1c items=0 ppid=5258 pid=5913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.107000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:26.111501 kubelet[3986]: E1216 13:11:26.111437 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:26.137448 containerd[2582]: time="2025-12-16T13:11:26.137419900Z" level=info msg="connecting to shim baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68" address="unix:///run/containerd/s/b6079120348687c3e548edcc4e931a2f515afd4e0d2b562064032b533d24b38c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:26.163897 systemd[1]: Started cri-containerd-baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68.scope - libcontainer container baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68. Dec 16 13:11:26.177000 audit: BPF prog-id=270 op=LOAD Dec 16 13:11:26.178000 audit: BPF prog-id=271 op=LOAD Dec 16 13:11:26.178000 audit[5935]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5923 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663166303236643665663135373639306137303433363936636331 Dec 16 13:11:26.178000 audit: BPF prog-id=271 op=UNLOAD Dec 16 13:11:26.178000 audit[5935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5923 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663166303236643665663135373639306137303433363936636331 Dec 16 13:11:26.178000 audit: BPF prog-id=272 op=LOAD Dec 16 13:11:26.178000 audit[5935]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5923 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663166303236643665663135373639306137303433363936636331 Dec 16 13:11:26.178000 audit: BPF prog-id=273 op=LOAD Dec 16 13:11:26.178000 audit[5935]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5923 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663166303236643665663135373639306137303433363936636331 Dec 16 13:11:26.178000 audit: BPF prog-id=273 op=UNLOAD Dec 16 
13:11:26.178000 audit[5935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5923 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663166303236643665663135373639306137303433363936636331 Dec 16 13:11:26.178000 audit: BPF prog-id=272 op=UNLOAD Dec 16 13:11:26.178000 audit[5935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5923 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663166303236643665663135373639306137303433363936636331 Dec 16 13:11:26.179000 audit: BPF prog-id=274 op=LOAD Dec 16 13:11:26.179000 audit[5935]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5923 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663166303236643665663135373639306137303433363936636331 Dec 16 13:11:26.216124 containerd[2582]: time="2025-12-16T13:11:26.216092992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7sznl,Uid:c9d340d2-955d-4739-b7cb-fc9a188575e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"baf1f026d6ef157690a7043696cc11fd0779218843bf7ba30712ccd7289d5c68\"" Dec 16 13:11:26.217628 containerd[2582]: time="2025-12-16T13:11:26.217610762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:11:26.218000 audit[5961]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5961 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:26.218000 audit[5961]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb3f75260 a2=0 a3=7ffdb3f7524c items=0 ppid=4121 pid=5961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:26.223000 audit[5961]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=5961 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:26.223000 audit[5961]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdb3f75260 a2=0 a3=7ffdb3f7524c items=0 ppid=4121 pid=5961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:26.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:26.467147 containerd[2582]: time="2025-12-16T13:11:26.467124976Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:26.470281 containerd[2582]: time="2025-12-16T13:11:26.470256876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:11:26.470322 containerd[2582]: time="2025-12-16T13:11:26.470272034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:26.470415 kubelet[3986]: E1216 13:11:26.470392 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:11:26.470455 kubelet[3986]: E1216 13:11:26.470426 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:11:26.470590 kubelet[3986]: E1216 13:11:26.470555 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzznm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7sznl_calico-system(c9d340d2-955d-4739-b7cb-fc9a188575e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:26.471791 kubelet[3986]: E1216 13:11:26.471747 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:11:26.974290 containerd[2582]: time="2025-12-16T13:11:26.974166847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6997d77-z274w,Uid:f055798a-e699-42ea-8c24-f7896c6361d5,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:11:27.060569 systemd-networkd[2197]: cali72c988a8201: Link UP Dec 16 13:11:27.062819 systemd-networkd[2197]: cali72c988a8201: Gained carrier Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.009 [INFO][5964] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0 calico-apiserver-67f6997d77- calico-apiserver f055798a-e699-42ea-8c24-f7896c6361d5 803 0 2025-12-16 13:10:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f6997d77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-e647365c22 calico-apiserver-67f6997d77-z274w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72c988a8201 [] [] }} ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-z274w" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.009 [INFO][5964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" 
Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-z274w" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.026 [INFO][5975] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" HandleID="k8s-pod-network.824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Workload="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.026 [INFO][5975] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" HandleID="k8s-pod-network.824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Workload="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-e647365c22", "pod":"calico-apiserver-67f6997d77-z274w", "timestamp":"2025-12-16 13:11:27.026648445 +0000 UTC"}, Hostname:"ci-4547.0.0-a-e647365c22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.026 [INFO][5975] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.026 [INFO][5975] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.026 [INFO][5975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-e647365c22' Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.031 [INFO][5975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.033 [INFO][5975] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.037 [INFO][5975] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.038 [INFO][5975] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.040 [INFO][5975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.040 [INFO][5975] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.041 [INFO][5975] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.045 [INFO][5975] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.128/26 
handle="k8s-pod-network.824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.056 [INFO][5975] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.136/26] block=192.168.64.128/26 handle="k8s-pod-network.824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.056 [INFO][5975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.136/26] handle="k8s-pod-network.824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" host="ci-4547.0.0-a-e647365c22" Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.056 [INFO][5975] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:11:27.080098 containerd[2582]: 2025-12-16 13:11:27.056 [INFO][5975] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.136/26] IPv6=[] ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" HandleID="k8s-pod-network.824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Workload="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" Dec 16 13:11:27.080743 containerd[2582]: 2025-12-16 13:11:27.057 [INFO][5964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-z274w" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0", GenerateName:"calico-apiserver-67f6997d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"f055798a-e699-42ea-8c24-f7896c6361d5", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f6997d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"", Pod:"calico-apiserver-67f6997d77-z274w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72c988a8201", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:27.080743 containerd[2582]: 2025-12-16 13:11:27.057 [INFO][5964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.136/32] ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-z274w" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" Dec 16 13:11:27.080743 containerd[2582]: 
2025-12-16 13:11:27.057 [INFO][5964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72c988a8201 ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-z274w" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" Dec 16 13:11:27.080743 containerd[2582]: 2025-12-16 13:11:27.063 [INFO][5964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-z274w" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" Dec 16 13:11:27.080743 containerd[2582]: 2025-12-16 13:11:27.063 [INFO][5964] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-z274w" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0", GenerateName:"calico-apiserver-67f6997d77-", Namespace:"calico-apiserver", SelfLink:"", UID:"f055798a-e699-42ea-8c24-f7896c6361d5", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f6997d77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-e647365c22", ContainerID:"824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e", Pod:"calico-apiserver-67f6997d77-z274w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72c988a8201", MAC:"f2:e2:d7:27:03:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:11:27.080743 containerd[2582]: 2025-12-16 13:11:27.078 [INFO][5964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" Namespace="calico-apiserver" Pod="calico-apiserver-67f6997d77-z274w" WorkloadEndpoint="ci--4547.0.0--a--e647365c22-k8s-calico--apiserver--67f6997d77--z274w-eth0" Dec 16 13:11:27.095000 audit[5990]: NETFILTER_CFG table=filter:142 family=2 entries=57 op=nft_register_chain pid=5990 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:11:27.095000 audit[5990]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffefb884f50 a2=0 a3=7ffefb884f3c items=0 ppid=5258 pid=5990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.095000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:11:27.111582 kubelet[3986]: E1216 13:11:27.111551 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:11:27.140728 containerd[2582]: time="2025-12-16T13:11:27.140270245Z" level=info msg="connecting to shim 824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e" address="unix:///run/containerd/s/55b198c6aef2b6a35f67030f14e8c2a4e26f673adca8d2b537beae93ea18d84d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:11:27.160875 systemd[1]: Started cri-containerd-824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e.scope - libcontainer container 824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e. Dec 16 13:11:27.168000 audit: BPF prog-id=275 op=LOAD Dec 16 13:11:27.169000 audit: BPF prog-id=276 op=LOAD Dec 16 13:11:27.169000 audit[6012]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6001 pid=6012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343834316430636533386563363266366135303636363739343131 Dec 16 13:11:27.169000 audit: BPF prog-id=276 op=UNLOAD Dec 16 13:11:27.169000 audit[6012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6001 pid=6012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343834316430636533386563363266366135303636363739343131 Dec 16 13:11:27.169000 audit: BPF prog-id=277 op=LOAD Dec 16 13:11:27.169000 audit[6012]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=6001 pid=6012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343834316430636533386563363266366135303636363739343131 Dec 16 13:11:27.169000 audit: BPF 
prog-id=278 op=LOAD Dec 16 13:11:27.169000 audit[6012]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=6001 pid=6012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343834316430636533386563363266366135303636363739343131 Dec 16 13:11:27.169000 audit: BPF prog-id=278 op=UNLOAD Dec 16 13:11:27.169000 audit[6012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6001 pid=6012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343834316430636533386563363266366135303636363739343131 Dec 16 13:11:27.169000 audit: BPF prog-id=277 op=UNLOAD Dec 16 13:11:27.169000 audit[6012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6001 pid=6012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343834316430636533386563363266366135303636363739343131 Dec 16 13:11:27.169000 audit: BPF prog-id=279 op=LOAD Dec 16 13:11:27.169000 audit[6012]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=6001 pid=6012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343834316430636533386563363266366135303636363739343131 Dec 16 13:11:27.198270 containerd[2582]: time="2025-12-16T13:11:27.198252633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6997d77-z274w,Uid:f055798a-e699-42ea-8c24-f7896c6361d5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"824841d0ce38ec62f6a506667941187b9556b9680c0d07ff58777d94a4dee08e\"" Dec 16 13:11:27.199477 containerd[2582]: time="2025-12-16T13:11:27.199311301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:11:27.238000 audit[6040]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=6040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:27.238000 audit[6040]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc5b56cda0 a2=0 a3=7ffc5b56cd8c items=0 ppid=4121 pid=6040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:27.243000 audit[6040]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=6040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:27.243000 audit[6040]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc5b56cda0 a2=0 a3=7ffc5b56cd8c items=0 ppid=4121 pid=6040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:27.243000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:27.443326 containerd[2582]: time="2025-12-16T13:11:27.443291848Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:27.446152 containerd[2582]: time="2025-12-16T13:11:27.446124554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:11:27.446255 containerd[2582]: time="2025-12-16T13:11:27.446177334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:27.446282 kubelet[3986]: E1216 13:11:27.446252 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:11:27.446314 kubelet[3986]: E1216 13:11:27.446288 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:11:27.446404 kubelet[3986]: E1216 13:11:27.446377 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dljlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f6997d77-z274w_calico-apiserver(f055798a-e699-42ea-8c24-f7896c6361d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:27.447610 kubelet[3986]: E1216 13:11:27.447582 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:11:27.972843 systemd-networkd[2197]: calia1a866dc9c1: Gained IPv6LL Dec 16 13:11:28.115288 kubelet[3986]: E1216 13:11:28.115263 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:11:28.116227 kubelet[3986]: E1216 13:11:28.116192 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:11:28.222000 audit[6048]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=6048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:28.222000 audit[6048]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5f0491b0 a2=0 a3=7fff5f04919c items=0 ppid=4121 pid=6048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:28.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:28.227000 audit[6048]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=6048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:11:28.227000 audit[6048]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff5f0491b0 a2=0 a3=7fff5f04919c items=0 ppid=4121 pid=6048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:11:28.227000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:11:28.292812 systemd-networkd[2197]: cali72c988a8201: Gained IPv6LL Dec 16 13:11:29.115228 kubelet[3986]: E1216 13:11:29.115100 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:11:33.975661 containerd[2582]: time="2025-12-16T13:11:33.975603816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:11:34.239504 containerd[2582]: time="2025-12-16T13:11:34.239410068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:34.242263 containerd[2582]: time="2025-12-16T13:11:34.242227804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:11:34.242339 containerd[2582]: time="2025-12-16T13:11:34.242284834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:34.242427 kubelet[3986]: E1216 13:11:34.242376 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:11:34.242908 kubelet[3986]: E1216 13:11:34.242434 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:11:34.242908 kubelet[3986]: E1216 13:11:34.242533 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:790b37891fd14fdf909b39bfc1c2dcaf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqxfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc5cb4685-dj4s5_calico-system(58631ddf-c520-4b8f-9eb4-5eeeca2898ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:34.244919 containerd[2582]: time="2025-12-16T13:11:34.244885210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:11:34.495415 containerd[2582]: time="2025-12-16T13:11:34.494960463Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:34.498056 containerd[2582]: time="2025-12-16T13:11:34.498022563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:11:34.498132 containerd[2582]: time="2025-12-16T13:11:34.498077955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:34.498238 kubelet[3986]: E1216 13:11:34.498215 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" 
Dec 16 13:11:34.498276 kubelet[3986]: E1216 13:11:34.498249 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:11:34.498381 kubelet[3986]: E1216 13:11:34.498334 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqxfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc5cb4685-dj4s5_calico-system(58631ddf-c520-4b8f-9eb4-5eeeca2898ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:34.499509 kubelet[3986]: E1216 13:11:34.499453 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:11:36.975555 containerd[2582]: time="2025-12-16T13:11:36.975261020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 
13:11:37.230967 containerd[2582]: time="2025-12-16T13:11:37.230875799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:37.233873 containerd[2582]: time="2025-12-16T13:11:37.233822353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:11:37.233996 containerd[2582]: time="2025-12-16T13:11:37.233828403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:37.234022 kubelet[3986]: E1216 13:11:37.233979 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:11:37.234022 kubelet[3986]: E1216 13:11:37.234014 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:11:37.234257 kubelet[3986]: E1216 13:11:37.234127 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ftln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-67f6997d77-8hwmz_calico-apiserver(02384784-68b8-42d3-aba8-a97ba2d37c12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:37.235579 kubelet[3986]: E1216 13:11:37.235537 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:11:38.975649 containerd[2582]: time="2025-12-16T13:11:38.975307273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:11:39.230068 containerd[2582]: time="2025-12-16T13:11:39.229987545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:39.233264 containerd[2582]: time="2025-12-16T13:11:39.233236902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:11:39.233342 containerd[2582]: time="2025-12-16T13:11:39.233288590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:39.233421 kubelet[3986]: E1216 13:11:39.233378 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:11:39.233628 kubelet[3986]: E1216 13:11:39.233427 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:11:39.233628 kubelet[3986]: E1216 13:11:39.233544 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzznm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7sznl_calico-system(c9d340d2-955d-4739-b7cb-fc9a188575e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:39.234956 kubelet[3986]: E1216 13:11:39.234909 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:11:39.974663 containerd[2582]: time="2025-12-16T13:11:39.974582659Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:11:40.227043 containerd[2582]: time="2025-12-16T13:11:40.226974325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:40.229892 containerd[2582]: time="2025-12-16T13:11:40.229852572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:11:40.229950 containerd[2582]: time="2025-12-16T13:11:40.229906329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:40.230036 kubelet[3986]: E1216 13:11:40.230004 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:11:40.230075 kubelet[3986]: E1216 13:11:40.230044 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:11:40.230261 kubelet[3986]: E1216 13:11:40.230200 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59k48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5469dcf444-55tl9_calico-system(2527a715-8ea0-4d0b-a053-0e471ff72634): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:40.230612 containerd[2582]: time="2025-12-16T13:11:40.230591881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:11:40.231938 kubelet[3986]: E1216 13:11:40.231903 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:11:40.489431 containerd[2582]: time="2025-12-16T13:11:40.489365672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:40.492341 containerd[2582]: time="2025-12-16T13:11:40.492299438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:11:40.492506 containerd[2582]: time="2025-12-16T13:11:40.492309265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:40.492537 kubelet[3986]: E1216 13:11:40.492475 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:11:40.492537 kubelet[3986]: E1216 13:11:40.492511 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:11:40.492787 kubelet[3986]: E1216 13:11:40.492616 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dljlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f6997d77-z274w_calico-apiserver(f055798a-e699-42ea-8c24-f7896c6361d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:40.494040 kubelet[3986]: E1216 13:11:40.494005 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:11:40.975880 containerd[2582]: time="2025-12-16T13:11:40.975753633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:11:41.215708 containerd[2582]: time="2025-12-16T13:11:41.215672349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:41.218537 containerd[2582]: time="2025-12-16T13:11:41.218505190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:11:41.218605 containerd[2582]: time="2025-12-16T13:11:41.218557181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:41.218655 kubelet[3986]: E1216 13:11:41.218618 3986 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:11:41.218688 kubelet[3986]: E1216 13:11:41.218650 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:11:41.219016 kubelet[3986]: E1216 13:11:41.218791 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl6fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:41.221024 containerd[2582]: time="2025-12-16T13:11:41.221000361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:11:41.469814 containerd[2582]: time="2025-12-16T13:11:41.469778000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:11:41.472641 containerd[2582]: time="2025-12-16T13:11:41.472611279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:11:41.472695 containerd[2582]: time="2025-12-16T13:11:41.472625786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:11:41.472844 kubelet[3986]: E1216 13:11:41.472822 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:11:41.472901 kubelet[3986]: E1216 13:11:41.472866 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:11:41.473151 kubelet[3986]: E1216 13:11:41.472960 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl6fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:11:41.474155 kubelet[3986]: E1216 13:11:41.474127 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:47.974596 kubelet[3986]: E1216 13:11:47.974553 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:11:47.976959 kubelet[3986]: E1216 13:11:47.976623 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:11:49.975354 kubelet[3986]: E1216 13:11:49.975310 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:11:52.976041 kubelet[3986]: E1216 13:11:52.975976 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:11:53.976113 kubelet[3986]: E1216 13:11:53.976076 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:11:54.976593 kubelet[3986]: E1216 13:11:54.976545 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:11:59.515211 update_engine[2541]: I20251216 13:11:59.514825 2541 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 13:11:59.515211 update_engine[2541]: I20251216 13:11:59.514899 2541 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 13:11:59.515211 update_engine[2541]: I20251216 13:11:59.515043 2541 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 13:11:59.516007 update_engine[2541]: I20251216 13:11:59.515950 2541 omaha_request_params.cc:62] Current group set to alpha Dec 16 13:11:59.516736 update_engine[2541]: I20251216 13:11:59.516300 2541 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 13:11:59.516736 update_engine[2541]: I20251216 13:11:59.516314 2541 update_attempter.cc:643] Scheduling an action processor start. 
Dec 16 13:11:59.516736 update_engine[2541]: I20251216 13:11:59.516331 2541 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:11:59.516736 update_engine[2541]: I20251216 13:11:59.516364 2541 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 13:11:59.516736 update_engine[2541]: I20251216 13:11:59.516417 2541 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:11:59.516736 update_engine[2541]: I20251216 13:11:59.516420 2541 omaha_request_action.cc:272] Request: Dec 16 13:11:59.516736 update_engine[2541]: Dec 16 13:11:59.516736 update_engine[2541]: Dec 16 13:11:59.516736 update_engine[2541]: Dec 16 13:11:59.516736 update_engine[2541]: Dec 16 13:11:59.516736 update_engine[2541]: Dec 16 13:11:59.516736 update_engine[2541]: Dec 16 13:11:59.516736 update_engine[2541]: Dec 16 13:11:59.516736 update_engine[2541]: Dec 16 13:11:59.516736 update_engine[2541]: I20251216 13:11:59.516425 2541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:11:59.518536 locksmithd[2637]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 13:11:59.518987 update_engine[2541]: I20251216 13:11:59.517969 2541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:11:59.518987 update_engine[2541]: I20251216 13:11:59.518483 2541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:11:59.550429 update_engine[2541]: E20251216 13:11:59.550329 2541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:11:59.550429 update_engine[2541]: I20251216 13:11:59.550411 2541 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 13:12:01.975810 containerd[2582]: time="2025-12-16T13:12:01.975753996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:12:02.276684 containerd[2582]: time="2025-12-16T13:12:02.276483023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:02.279688 containerd[2582]: time="2025-12-16T13:12:02.279625902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:12:02.279958 containerd[2582]: time="2025-12-16T13:12:02.279675049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:02.280246 kubelet[3986]: E1216 13:12:02.280194 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:12:02.280246 kubelet[3986]: E1216 13:12:02.280233 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:12:02.282300 kubelet[3986]: E1216 13:12:02.280964 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:790b37891fd14fdf909b39bfc1c2dcaf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqxfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc5cb4685-dj4s5_calico-system(58631ddf-c520-4b8f-9eb4-5eeeca2898ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:02.282802 containerd[2582]: time="2025-12-16T13:12:02.282681863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:12:02.529054 containerd[2582]: time="2025-12-16T13:12:02.528973096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:02.537615 containerd[2582]: time="2025-12-16T13:12:02.537524429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:12:02.537615 containerd[2582]: time="2025-12-16T13:12:02.537595732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:02.538624 kubelet[3986]: E1216 13:12:02.537856 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:12:02.538624 kubelet[3986]: E1216 13:12:02.537891 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:12:02.538624 kubelet[3986]: E1216 13:12:02.538096 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzznm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7sznl_calico-system(c9d340d2-955d-4739-b7cb-fc9a188575e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:02.539029 containerd[2582]: time="2025-12-16T13:12:02.538974256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:12:02.540801 kubelet[3986]: E1216 13:12:02.540765 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 
16 13:12:02.780245 containerd[2582]: time="2025-12-16T13:12:02.779735487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:02.782565 containerd[2582]: time="2025-12-16T13:12:02.782487527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:12:02.782565 containerd[2582]: time="2025-12-16T13:12:02.782549523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:02.782767 kubelet[3986]: E1216 13:12:02.782742 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:12:02.782812 kubelet[3986]: E1216 13:12:02.782776 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:12:02.783001 kubelet[3986]: E1216 13:12:02.782970 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ftln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-67f6997d77-8hwmz_calico-apiserver(02384784-68b8-42d3-aba8-a97ba2d37c12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:02.783225 containerd[2582]: time="2025-12-16T13:12:02.783122938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:12:02.784430 kubelet[3986]: E1216 13:12:02.784397 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:12:03.037947 containerd[2582]: time="2025-12-16T13:12:03.037875898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:03.040978 containerd[2582]: time="2025-12-16T13:12:03.040924148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:12:03.041036 containerd[2582]: time="2025-12-16T13:12:03.040998696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:03.041118 kubelet[3986]: E1216 13:12:03.041095 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:12:03.041164 kubelet[3986]: E1216 13:12:03.041124 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:12:03.041263 kubelet[3986]: E1216 13:12:03.041223 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqxfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc5cb4685-dj4s5_calico-system(58631ddf-c520-4b8f-9eb4-5eeeca2898ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:03.043087 kubelet[3986]: E1216 13:12:03.043022 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:12:07.975642 containerd[2582]: time="2025-12-16T13:12:07.975415747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:12:08.238268 containerd[2582]: time="2025-12-16T13:12:08.238177675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:08.241080 containerd[2582]: time="2025-12-16T13:12:08.241044390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
16 13:12:08.241145 containerd[2582]: time="2025-12-16T13:12:08.241118801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:08.241271 kubelet[3986]: E1216 13:12:08.241230 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:12:08.241552 kubelet[3986]: E1216 13:12:08.241273 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:12:08.241552 kubelet[3986]: E1216 13:12:08.241488 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dljlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f6997d77-z274w_calico-apiserver(f055798a-e699-42ea-8c24-f7896c6361d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:08.242166 containerd[2582]: time="2025-12-16T13:12:08.242126710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 
13:12:08.243545 kubelet[3986]: E1216 13:12:08.243463 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:12:08.491831 containerd[2582]: time="2025-12-16T13:12:08.491746935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:08.496073 containerd[2582]: time="2025-12-16T13:12:08.496015598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:12:08.496073 containerd[2582]: time="2025-12-16T13:12:08.496055867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:08.496192 kubelet[3986]: E1216 13:12:08.496169 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:12:08.496247 kubelet[3986]: E1216 13:12:08.496200 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:12:08.496407 kubelet[3986]: E1216 13:12:08.496369 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59k48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5469dcf444-55tl9_calico-system(2527a715-8ea0-4d0b-a053-0e471ff72634): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:08.496791 containerd[2582]: time="2025-12-16T13:12:08.496771564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:12:08.498121 kubelet[3986]: E1216 13:12:08.498091 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:12:08.744772 containerd[2582]: time="2025-12-16T13:12:08.743425989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:08.747762 containerd[2582]: time="2025-12-16T13:12:08.747674633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:12:08.747762 containerd[2582]: time="2025-12-16T13:12:08.747716338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:08.747862 kubelet[3986]: E1216 13:12:08.747839 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:12:08.747894 kubelet[3986]: E1216 13:12:08.747885 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:12:08.748255 kubelet[3986]: E1216 13:12:08.747997 3986 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl6fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:08.750233 containerd[2582]: time="2025-12-16T13:12:08.750077011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:12:08.994193 containerd[2582]: time="2025-12-16T13:12:08.994162479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:08.997212 containerd[2582]: time="2025-12-16T13:12:08.997105020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:12:08.997212 containerd[2582]: time="2025-12-16T13:12:08.997145034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:08.997715 kubelet[3986]: E1216 13:12:08.997602 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:12:08.997715 kubelet[3986]: E1216 13:12:08.997649 3986 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:12:08.998229 kubelet[3986]: E1216 13:12:08.998008 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl6fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:08.999416 kubelet[3986]: E1216 13:12:08.999378 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:12:09.493187 update_engine[2541]: I20251216 13:12:09.493126 2541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:12:09.493442 
update_engine[2541]: I20251216 13:12:09.493212 2541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:12:09.493526 update_engine[2541]: I20251216 13:12:09.493505 2541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:12:09.522120 update_engine[2541]: E20251216 13:12:09.522083 2541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:12:09.522195 update_engine[2541]: I20251216 13:12:09.522171 2541 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 16 13:12:15.975688 kubelet[3986]: E1216 13:12:15.975451 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:12:15.977647 kubelet[3986]: E1216 13:12:15.977150 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:12:15.977647 kubelet[3986]: E1216 13:12:15.977448 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:12:19.490714 update_engine[2541]: I20251216 13:12:19.490658 2541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:12:19.491044 update_engine[2541]: I20251216 13:12:19.490747 2541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:12:19.491068 update_engine[2541]: I20251216 13:12:19.491044 2541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 13:12:19.522323 update_engine[2541]: E20251216 13:12:19.522212 2541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:12:19.522323 update_engine[2541]: I20251216 13:12:19.522296 2541 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 16 13:12:19.976811 kubelet[3986]: E1216 13:12:19.976770 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:12:21.976993 kubelet[3986]: E1216 13:12:21.976472 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:12:21.977682 kubelet[3986]: E1216 13:12:21.977658 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:12:25.669104 kernel: kauditd_printk_skb: 70 callbacks suppressed Dec 16 13:12:25.669343 kernel: audit: type=1130 audit(1765890745.667:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.11:22-10.200.16.10:48384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:25.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.11:22-10.200.16.10:48384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:25.667669 systemd[1]: Started sshd@7-10.200.8.11:22-10.200.16.10:48384.service - OpenSSH per-connection server daemon (10.200.16.10:48384). 
Dec 16 13:12:26.213819 sshd[6139]: Accepted publickey for core from 10.200.16.10 port 48384 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:12:26.213000 audit[6139]: USER_ACCT pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.221715 kernel: audit: type=1101 audit(1765890746.213:771): pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.222610 sshd-session[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:26.221000 audit[6139]: CRED_ACQ pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.231803 kernel: audit: type=1103 audit(1765890746.221:772): pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.233620 systemd-logind[2540]: New session 11 of user core. Dec 16 13:12:26.240363 kernel: audit: type=1006 audit(1765890746.221:773): pid=6139 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 13:12:26.240421 kernel: audit: type=1300 audit(1765890746.221:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8b6bbac0 a2=3 a3=0 items=0 ppid=1 pid=6139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:26.221000 audit[6139]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8b6bbac0 a2=3 a3=0 items=0 ppid=1 pid=6139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:26.221000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:26.243719 kernel: audit: type=1327 audit(1765890746.221:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:26.243874 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 13:12:26.247000 audit[6139]: USER_START pid=6139 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.254731 kernel: audit: type=1105 audit(1765890746.247:774): pid=6139 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.253000 audit[6143]: CRED_ACQ pid=6143 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.261726 kernel: audit: type=1103 audit(1765890746.253:775): pid=6143 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.634077 sshd[6143]: Connection closed by 10.200.16.10 port 48384 Dec 16 13:12:26.634316 sshd-session[6139]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:26.636000 audit[6139]: USER_END pid=6139 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.642221 systemd[1]: sshd@7-10.200.8.11:22-10.200.16.10:48384.service: Deactivated successfully. Dec 16 13:12:26.644223 kernel: audit: type=1106 audit(1765890746.636:776): pid=6139 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.648852 kernel: audit: type=1104 audit(1765890746.636:777): pid=6139 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.636000 audit[6139]: CRED_DISP pid=6139 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:26.645478 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:12:26.648537 systemd-logind[2540]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:12:26.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.11:22-10.200.16.10:48384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:26.649484 systemd-logind[2540]: Removed session 11. 
Dec 16 13:12:26.976385 kubelet[3986]: E1216 13:12:26.975395 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:12:29.487591 update_engine[2541]: I20251216 13:12:29.487520 2541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:12:29.487987 update_engine[2541]: I20251216 13:12:29.487622 2541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:12:29.487987 update_engine[2541]: I20251216 13:12:29.487958 2541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:12:29.518800 update_engine[2541]: E20251216 13:12:29.518098 2541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518188 2541 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518198 2541 omaha_request_action.cc:617] Omaha request response: Dec 16 13:12:29.518800 update_engine[2541]: E20251216 13:12:29.518270 2541 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518286 2541 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518290 2541 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518296 2541 update_attempter.cc:306] Processing Done. Dec 16 13:12:29.518800 update_engine[2541]: E20251216 13:12:29.518311 2541 update_attempter.cc:619] Update failed. Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518316 2541 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518321 2541 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518328 2541 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518409 2541 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518430 2541 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:12:29.518800 update_engine[2541]: I20251216 13:12:29.518435 2541 omaha_request_action.cc:272] Request: Dec 16 13:12:29.518800 update_engine[2541]: Dec 16 13:12:29.518800 update_engine[2541]: Dec 16 13:12:29.519195 update_engine[2541]: Dec 16 13:12:29.519195 update_engine[2541]: Dec 16 13:12:29.519195 update_engine[2541]: Dec 16 13:12:29.519195 update_engine[2541]: Dec 16 13:12:29.519195 update_engine[2541]: I20251216 13:12:29.518441 2541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:12:29.519195 update_engine[2541]: I20251216 13:12:29.518463 2541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:12:29.519195 update_engine[2541]: I20251216 13:12:29.518761 2541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:12:29.519654 locksmithd[2637]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 16 13:12:29.581799 update_engine[2541]: E20251216 13:12:29.581312 2541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:12:29.581799 update_engine[2541]: I20251216 13:12:29.581385 2541 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 13:12:29.581799 update_engine[2541]: I20251216 13:12:29.581393 2541 omaha_request_action.cc:617] Omaha request response: Dec 16 13:12:29.581799 update_engine[2541]: I20251216 13:12:29.581400 2541 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:12:29.581799 update_engine[2541]: I20251216 13:12:29.581404 2541 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:12:29.581799 update_engine[2541]: I20251216 13:12:29.581409 2541 update_attempter.cc:306] Processing Done. Dec 16 13:12:29.581799 update_engine[2541]: I20251216 13:12:29.581418 2541 update_attempter.cc:310] Error event sent. 
Dec 16 13:12:29.581799 update_engine[2541]: I20251216 13:12:29.581425 2541 update_check_scheduler.cc:74] Next update check in 41m31s Dec 16 13:12:29.583755 locksmithd[2637]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 16 13:12:29.976306 kubelet[3986]: E1216 13:12:29.976261 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:12:29.977076 kubelet[3986]: E1216 13:12:29.976695 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:12:31.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.11:22-10.200.16.10:33340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:31.756301 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:12:31.756330 kernel: audit: type=1130 audit(1765890751.754:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.11:22-10.200.16.10:33340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:31.754608 systemd[1]: Started sshd@8-10.200.8.11:22-10.200.16.10:33340.service - OpenSSH per-connection server daemon (10.200.16.10:33340). 
Dec 16 13:12:32.290000 audit[6157]: USER_ACCT pid=6157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.295089 sshd-session[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:32.296479 sshd[6157]: Accepted publickey for core from 10.200.16.10 port 33340 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:12:32.296812 kernel: audit: type=1101 audit(1765890752.290:780): pid=6157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.292000 audit[6157]: CRED_ACQ pid=6157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.304104 kernel: audit: type=1103 audit(1765890752.292:781): pid=6157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.304201 kernel: audit: type=1006 audit(1765890752.293:782): pid=6157 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 13:12:32.303474 systemd-logind[2540]: New session 12 of user core. Dec 16 13:12:32.307729 kernel: audit: type=1300 audit(1765890752.293:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8a030330 a2=3 a3=0 items=0 ppid=1 pid=6157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:32.293000 audit[6157]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8a030330 a2=3 a3=0 items=0 ppid=1 pid=6157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:32.293000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:32.310285 kernel: audit: type=1327 audit(1765890752.293:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:32.310877 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 13:12:32.314000 audit[6157]: USER_START pid=6157 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.320729 kernel: audit: type=1105 audit(1765890752.314:783): pid=6157 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.321000 audit[6161]: CRED_ACQ pid=6161 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.328733 kernel: audit: type=1103 audit(1765890752.321:784): pid=6161 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.639734 sshd[6161]: Connection closed by 10.200.16.10 port 33340 Dec 16 13:12:32.639941 sshd-session[6157]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:32.641000 audit[6157]: USER_END pid=6157 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.644549 systemd[1]: sshd@8-10.200.8.11:22-10.200.16.10:33340.service: Deactivated successfully. Dec 16 13:12:32.646734 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:12:32.641000 audit[6157]: CRED_DISP pid=6157 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.649434 systemd-logind[2540]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:12:32.650231 systemd-logind[2540]: Removed session 12. Dec 16 13:12:32.653057 kernel: audit: type=1106 audit(1765890752.641:785): pid=6157 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.653112 kernel: audit: type=1104 audit(1765890752.641:786): pid=6157 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:32.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.11:22-10.200.16.10:33340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:12:32.977978 kubelet[3986]: E1216 13:12:32.977741 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:12:32.980997 kubelet[3986]: E1216 13:12:32.980951 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:12:34.976841 kubelet[3986]: E1216 13:12:34.976796 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:12:37.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.11:22-10.200.16.10:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:37.753082 systemd[1]: Started sshd@9-10.200.8.11:22-10.200.16.10:33342.service - OpenSSH per-connection server daemon (10.200.16.10:33342). Dec 16 13:12:37.754327 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:12:37.754384 kernel: audit: type=1130 audit(1765890757.751:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.11:22-10.200.16.10:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:12:38.293000 audit[6174]: USER_ACCT pid=6174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.298731 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:38.296000 audit[6174]: CRED_ACQ pid=6174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.299882 sshd[6174]: Accepted publickey for core from 10.200.16.10 port 33342 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:12:38.301838 kernel: audit: type=1101 audit(1765890758.293:789): pid=6174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.301894 kernel: audit: type=1103 audit(1765890758.296:790): pid=6174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.305323 kernel: audit: type=1006 audit(1765890758.296:791): pid=6174 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 13:12:38.296000 audit[6174]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda8370d70 a2=3 a3=0 items=0 ppid=1 pid=6174 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:38.308984 kernel: audit: type=1300 audit(1765890758.296:791): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda8370d70 a2=3 a3=0 items=0 ppid=1 pid=6174 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:38.309918 systemd-logind[2540]: New session 13 of user core. Dec 16 13:12:38.296000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:38.314718 kernel: audit: type=1327 audit(1765890758.296:791): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:38.317890 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 13:12:38.319000 audit[6174]: USER_START pid=6174 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.323000 audit[6178]: CRED_ACQ pid=6178 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.328875 kernel: audit: type=1105 audit(1765890758.319:792): pid=6174 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.328934 kernel: audit: type=1103 audit(1765890758.323:793): pid=6178 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.647266 sshd[6178]: Connection closed by 10.200.16.10 port 33342 Dec 16 13:12:38.647481 sshd-session[6174]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:38.646000 audit[6174]: USER_END pid=6174 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.651366 systemd[1]: sshd@9-10.200.8.11:22-10.200.16.10:33342.service: Deactivated successfully. Dec 16 13:12:38.653038 systemd-logind[2540]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:12:38.653950 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:12:38.655978 systemd-logind[2540]: Removed session 13. Dec 16 13:12:38.646000 audit[6174]: CRED_DISP pid=6174 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.663976 kernel: audit: type=1106 audit(1765890758.646:794): pid=6174 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.664043 kernel: audit: type=1104 audit(1765890758.646:795): pid=6174 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:38.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.11:22-10.200.16.10:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:12:38.759264 systemd[1]: Started sshd@10-10.200.8.11:22-10.200.16.10:33350.service - OpenSSH per-connection server daemon (10.200.16.10:33350). Dec 16 13:12:38.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.11:22-10.200.16.10:33350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:39.294000 audit[6191]: USER_ACCT pid=6191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:39.295989 sshd[6191]: Accepted publickey for core from 10.200.16.10 port 33350 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:12:39.295000 audit[6191]: CRED_ACQ pid=6191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:39.295000 audit[6191]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdce111720 a2=3 a3=0 items=0 ppid=1 pid=6191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:39.295000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:39.297646 sshd-session[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:39.301684 systemd-logind[2540]: New session 14 of user core. Dec 16 13:12:39.309015 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:12:39.310000 audit[6191]: USER_START pid=6191 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:39.312000 audit[6197]: CRED_ACQ pid=6197 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:39.715564 sshd[6197]: Connection closed by 10.200.16.10 port 33350 Dec 16 13:12:39.715980 sshd-session[6191]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:39.715000 audit[6191]: USER_END pid=6191 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:39.715000 audit[6191]: CRED_DISP pid=6191 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:39.719162 systemd[1]: sshd@10-10.200.8.11:22-10.200.16.10:33350.service: Deactivated successfully. 
Dec 16 13:12:39.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.11:22-10.200.16.10:33350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:39.720744 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:12:39.721406 systemd-logind[2540]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:12:39.722540 systemd-logind[2540]: Removed session 14. Dec 16 13:12:39.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.11:22-10.200.16.10:33366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:39.827381 systemd[1]: Started sshd@11-10.200.8.11:22-10.200.16.10:33366.service - OpenSSH per-connection server daemon (10.200.16.10:33366). Dec 16 13:12:40.357000 audit[6207]: USER_ACCT pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:40.359570 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 33366 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:12:40.358000 audit[6207]: CRED_ACQ pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:40.358000 audit[6207]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9bfc87a0 a2=3 a3=0 items=0 ppid=1 pid=6207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:40.358000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:40.361298 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:40.365569 systemd-logind[2540]: New session 15 of user core. Dec 16 13:12:40.369859 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 13:12:40.370000 audit[6207]: USER_START pid=6207 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:40.371000 audit[6211]: CRED_ACQ pid=6211 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:40.717034 sshd[6211]: Connection closed by 10.200.16.10 port 33366 Dec 16 13:12:40.718832 sshd-session[6207]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:40.718000 audit[6207]: USER_END pid=6207 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:40.718000 audit[6207]: CRED_DISP pid=6207 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:40.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.11:22-10.200.16.10:33366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:40.723178 systemd[1]: sshd@11-10.200.8.11:22-10.200.16.10:33366.service: Deactivated successfully. Dec 16 13:12:40.725225 systemd-logind[2540]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:12:40.727598 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:12:40.731119 systemd-logind[2540]: Removed session 15. 
Dec 16 13:12:40.977442 kubelet[3986]: E1216 13:12:40.977205 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:12:43.975399 containerd[2582]: time="2025-12-16T13:12:43.975356965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:12:44.225200 containerd[2582]: time="2025-12-16T13:12:44.225158660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:44.228362 containerd[2582]: time="2025-12-16T13:12:44.227854986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:12:44.228502 containerd[2582]: time="2025-12-16T13:12:44.227995108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:44.228586 kubelet[3986]: E1216 13:12:44.228556 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:12:44.228876 kubelet[3986]: E1216 13:12:44.228594 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:12:44.228876 kubelet[3986]: E1216 13:12:44.228692 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:790b37891fd14fdf909b39bfc1c2dcaf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqxfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc5cb4685-dj4s5_calico-system(58631ddf-c520-4b8f-9eb4-5eeeca2898ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:44.230796 containerd[2582]: time="2025-12-16T13:12:44.230769836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:12:44.489362 containerd[2582]: time="2025-12-16T13:12:44.488996390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:44.491815 containerd[2582]: time="2025-12-16T13:12:44.491756758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:12:44.492014 containerd[2582]: time="2025-12-16T13:12:44.491788899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:44.492149 kubelet[3986]: E1216 13:12:44.492106 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:12:44.492202 kubelet[3986]: E1216 13:12:44.492155 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:12:44.492274 kubelet[3986]: E1216 13:12:44.492249 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqxfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fc5cb4685-dj4s5_calico-system(58631ddf-c520-4b8f-9eb4-5eeeca2898ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:44.493657 kubelet[3986]: E1216 13:12:44.493611 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:12:44.975610 containerd[2582]: time="2025-12-16T13:12:44.975412745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:12:45.226196 containerd[2582]: time="2025-12-16T13:12:45.226117101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:45.229601 containerd[2582]: time="2025-12-16T13:12:45.229519109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 
13:12:45.229601 containerd[2582]: time="2025-12-16T13:12:45.229572684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:45.229932 kubelet[3986]: E1216 13:12:45.229650 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:12:45.229932 kubelet[3986]: E1216 13:12:45.229680 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:12:45.229932 kubelet[3986]: E1216 13:12:45.229801 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzznm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7sznl_calico-system(c9d340d2-955d-4739-b7cb-fc9a188575e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:45.231379 kubelet[3986]: E1216 13:12:45.231337 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:12:45.830950 systemd[1]: Started sshd@12-10.200.8.11:22-10.200.16.10:35362.service - OpenSSH per-connection server daemon (10.200.16.10:35362). Dec 16 13:12:45.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.11:22-10.200.16.10:35362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:45.832139 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 13:12:45.832201 kernel: audit: type=1130 audit(1765890765.830:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.11:22-10.200.16.10:35362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:12:46.378000 audit[6233]: USER_ACCT pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.385212 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 35362 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:12:46.385745 kernel: audit: type=1101 audit(1765890766.378:816): pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.385000 audit[6233]: CRED_ACQ pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.386742 sshd-session[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:46.393899 kernel: audit: type=1103 audit(1765890766.385:817): pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.393967 kernel: audit: type=1006 audit(1765890766.385:818): pid=6233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 13:12:46.399739 kernel: audit: type=1300 audit(1765890766.385:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3904c520 a2=3 a3=0 items=0 ppid=1 pid=6233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:46.385000 audit[6233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3904c520 a2=3 a3=0 items=0 ppid=1 pid=6233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:46.397431 systemd-logind[2540]: New session 16 of user core. Dec 16 13:12:46.385000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:46.402284 kernel: audit: type=1327 audit(1765890766.385:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:46.406845 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 13:12:46.408000 audit[6233]: USER_START pid=6233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.417000 audit[6237]: CRED_ACQ pid=6237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.422510 kernel: audit: type=1105 audit(1765890766.408:819): pid=6233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.422611 kernel: audit: type=1103 audit(1765890766.417:820): pid=6237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.730638 sshd[6237]: Connection closed by 10.200.16.10 port 35362 Dec 16 13:12:46.731835 sshd-session[6233]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:46.731000 audit[6233]: USER_END pid=6233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.737476 systemd[1]: sshd@12-10.200.8.11:22-10.200.16.10:35362.service: Deactivated successfully. Dec 16 13:12:46.743099 kernel: audit: type=1106 audit(1765890766.731:821): pid=6233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.743169 kernel: audit: type=1104 audit(1765890766.731:822): pid=6233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.731000 audit[6233]: CRED_DISP pid=6233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:46.741525 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:12:46.742583 systemd-logind[2540]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:12:46.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.11:22-10.200.16.10:35362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:46.744380 systemd-logind[2540]: Removed session 16. 
Dec 16 13:12:46.975343 kubelet[3986]: E1216 13:12:46.975315 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:12:46.976877 kubelet[3986]: E1216 13:12:46.976837 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:12:47.975051 kubelet[3986]: E1216 13:12:47.975017 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:12:51.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.11:22-10.200.16.10:55688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:51.842599 systemd[1]: Started sshd@13-10.200.8.11:22-10.200.16.10:55688.service - OpenSSH per-connection server daemon (10.200.16.10:55688). Dec 16 13:12:51.844308 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:12:51.844339 kernel: audit: type=1130 audit(1765890771.841:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.11:22-10.200.16.10:55688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:12:51.975170 containerd[2582]: time="2025-12-16T13:12:51.975132497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:12:52.214963 containerd[2582]: time="2025-12-16T13:12:52.214800531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:12:52.220199 containerd[2582]: time="2025-12-16T13:12:52.218442900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:12:52.220199 containerd[2582]: time="2025-12-16T13:12:52.218484196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:12:52.220377 kubelet[3986]: E1216 13:12:52.218581 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:12:52.220377 kubelet[3986]: E1216 13:12:52.218621 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:12:52.220377 kubelet[3986]: E1216 13:12:52.219942 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ftln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f6997d77-8hwmz_calico-apiserver(02384784-68b8-42d3-aba8-a97ba2d37c12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:12:52.221120 kubelet[3986]: E1216 13:12:52.221079 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:12:52.400000 audit[6278]: USER_ACCT pid=6278 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.402981 sshd-session[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:52.408165 sshd[6278]: Accepted publickey for core from 10.200.16.10 port 55688 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:12:52.408103 systemd-logind[2540]: New session 17 of user core. Dec 16 13:12:52.409498 kernel: audit: type=1101 audit(1765890772.400:825): pid=6278 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.401000 audit[6278]: CRED_ACQ pid=6278 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.416117 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 13:12:52.419987 kernel: audit: type=1103 audit(1765890772.401:826): pid=6278 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.420041 kernel: audit: type=1006 audit(1765890772.401:827): pid=6278 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 13:12:52.401000 audit[6278]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9eb2ddb0 a2=3 a3=0 items=0 ppid=1 pid=6278 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:52.425057 kernel: audit: type=1300 audit(1765890772.401:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9eb2ddb0 a2=3 a3=0 items=0 ppid=1 pid=6278 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:52.401000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:52.420000 audit[6278]: USER_START pid=6278 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.433645 kernel: audit: type=1327 audit(1765890772.401:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:52.433719 kernel: audit: type=1105 audit(1765890772.420:828): pid=6278 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.425000 audit[6282]: CRED_ACQ pid=6282 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.444144 kernel: audit: type=1103 audit(1765890772.425:829): pid=6282 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.755605 sshd[6282]: Connection closed by 10.200.16.10 port 55688 Dec 16 13:12:52.755970 sshd-session[6278]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:52.756000 audit[6278]: USER_END pid=6278 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.758976 systemd-logind[2540]: Session 17 logged out. Waiting for processes to exit. 
Dec 16 13:12:52.759528 systemd[1]: sshd@13-10.200.8.11:22-10.200.16.10:55688.service: Deactivated successfully. Dec 16 13:12:52.761204 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:12:52.764816 systemd-logind[2540]: Removed session 17. Dec 16 13:12:52.767197 kernel: audit: type=1106 audit(1765890772.756:830): pid=6278 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.767395 kernel: audit: type=1104 audit(1765890772.756:831): pid=6278 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.756000 audit[6278]: CRED_DISP pid=6278 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:52.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.11:22-10.200.16.10:55688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:57.866607 systemd[1]: Started sshd@14-10.200.8.11:22-10.200.16.10:55696.service - OpenSSH per-connection server daemon (10.200.16.10:55696). Dec 16 13:12:57.873145 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:12:57.873185 kernel: audit: type=1130 audit(1765890777.866:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.11:22-10.200.16.10:55696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:57.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.11:22-10.200.16.10:55696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:12:57.976201 kubelet[3986]: E1216 13:12:57.976173 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:12:57.976881 kubelet[3986]: E1216 13:12:57.976852 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:12:58.409783 kernel: audit: type=1101 audit(1765890778.403:834): pid=6316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.403000 audit[6316]: USER_ACCT pid=6316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.407920 sshd-session[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:58.410199 sshd[6316]: Accepted publickey for core from 10.200.16.10 port 55696 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:12:58.406000 audit[6316]: CRED_ACQ pid=6316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.421722 kernel: audit: type=1103 audit(1765890778.406:835): pid=6316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.421798 kernel: audit: type=1006 audit(1765890778.406:836): pid=6316 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 13:12:58.429963 kernel: audit: type=1300 audit(1765890778.406:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe207d15f0 a2=3 a3=0 items=0 ppid=1 pid=6316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:58.406000 audit[6316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe207d15f0 a2=3 a3=0 items=0 ppid=1 pid=6316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:58.433207 kernel: audit: type=1327 audit(1765890778.406:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:58.406000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:58.437716 systemd-logind[2540]: New session 18 of user core. Dec 16 13:12:58.441080 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:12:58.453117 kernel: audit: type=1105 audit(1765890778.442:837): pid=6316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.442000 audit[6316]: USER_START pid=6316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.454000 audit[6320]: CRED_ACQ pid=6320 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.461735 kernel: audit: type=1103 audit(1765890778.454:838): pid=6320 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.786766 sshd[6320]: Connection closed by 10.200.16.10 port 55696 Dec 16 13:12:58.788006 sshd-session[6316]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:58.788000 audit[6316]: USER_END pid=6316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.792573 systemd[1]: sshd@14-10.200.8.11:22-10.200.16.10:55696.service: Deactivated successfully. Dec 16 13:12:58.794153 systemd-logind[2540]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:12:58.795673 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:12:58.797730 kernel: audit: type=1106 audit(1765890778.788:839): pid=6316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.801770 systemd-logind[2540]: Removed session 18. 
Dec 16 13:12:58.788000 audit[6316]: CRED_DISP pid=6316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.11:22-10.200.16.10:55696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:58.808712 kernel: audit: type=1104 audit(1765890778.788:840): pid=6316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:58.896455 systemd[1]: Started sshd@15-10.200.8.11:22-10.200.16.10:55706.service - OpenSSH per-connection server daemon (10.200.16.10:55706). Dec 16 13:12:58.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.11:22-10.200.16.10:55706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:59.432000 audit[6333]: USER_ACCT pid=6333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:59.433368 sshd[6333]: Accepted publickey for core from 10.200.16.10 port 55706 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:12:59.433000 audit[6333]: CRED_ACQ pid=6333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:59.433000 audit[6333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec9a7bd70 a2=3 a3=0 items=0 ppid=1 pid=6333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:12:59.433000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:12:59.434618 sshd-session[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:59.439191 systemd-logind[2540]: New session 19 of user core. Dec 16 13:12:59.444836 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 13:12:59.446000 audit[6333]: USER_START pid=6333 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:59.447000 audit[6337]: CRED_ACQ pid=6337 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:59.863031 sshd[6337]: Connection closed by 10.200.16.10 port 55706 Dec 16 13:12:59.864844 sshd-session[6333]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:59.865000 audit[6333]: USER_END pid=6333 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:59.865000 audit[6333]: CRED_DISP pid=6333 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:12:59.867745 systemd[1]: sshd@15-10.200.8.11:22-10.200.16.10:55706.service: Deactivated successfully. Dec 16 13:12:59.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.11:22-10.200.16.10:55706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:12:59.870103 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:12:59.871114 systemd-logind[2540]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:12:59.873631 systemd-logind[2540]: Removed session 19. Dec 16 13:12:59.976970 systemd[1]: Started sshd@16-10.200.8.11:22-10.200.16.10:55708.service - OpenSSH per-connection server daemon (10.200.16.10:55708). Dec 16 13:12:59.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.11:22-10.200.16.10:55708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:13:00.538000 audit[6346]: USER_ACCT pid=6346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:00.540759 sshd[6346]: Accepted publickey for core from 10.200.16.10 port 55708 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:13:00.539000 audit[6346]: CRED_ACQ pid=6346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:00.539000 audit[6346]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe87007210 a2=3 a3=0 items=0 ppid=1 pid=6346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:00.539000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:00.542216 sshd-session[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:00.546637 systemd-logind[2540]: New session 20 of user core. Dec 16 13:13:00.551904 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:13:00.552000 audit[6346]: USER_START pid=6346 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:00.553000 audit[6350]: CRED_ACQ pid=6350 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:01.252000 audit[6360]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=6360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:13:01.252000 audit[6360]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd86b9b950 a2=0 a3=7ffd86b9b93c items=0 ppid=4121 pid=6360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:01.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:13:01.256000 audit[6360]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:13:01.256000 audit[6360]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd86b9b950 a2=0 a3=0 items=0 ppid=4121 pid=6360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:01.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:13:01.280000 audit[6362]: NETFILTER_CFG 
table=filter:149 family=2 entries=38 op=nft_register_rule pid=6362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:13:01.280000 audit[6362]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff5ea45310 a2=0 a3=7fff5ea452fc items=0 ppid=4121 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:01.280000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:13:01.287000 audit[6362]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=6362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:13:01.287000 audit[6362]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff5ea45310 a2=0 a3=0 items=0 ppid=4121 pid=6362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:01.287000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:13:01.366799 sshd[6350]: Connection closed by 10.200.16.10 port 55708 Dec 16 13:13:01.367203 sshd-session[6346]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:01.366000 audit[6346]: USER_END pid=6346 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:01.366000 audit[6346]: CRED_DISP pid=6346 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:01.369999 systemd[1]: sshd@16-10.200.8.11:22-10.200.16.10:55708.service: Deactivated successfully. Dec 16 13:13:01.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.11:22-10.200.16.10:55708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:01.371939 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:13:01.374005 systemd-logind[2540]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:13:01.374965 systemd-logind[2540]: Removed session 20. Dec 16 13:13:01.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.11:22-10.200.16.10:59436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:01.478139 systemd[1]: Started sshd@17-10.200.8.11:22-10.200.16.10:59436.service - OpenSSH per-connection server daemon (10.200.16.10:59436). 
Dec 16 13:13:01.976923 containerd[2582]: time="2025-12-16T13:13:01.976892012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:13:02.025026 sshd[6367]: Accepted publickey for core from 10.200.16.10 port 59436 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:13:02.023000 audit[6367]: USER_ACCT pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:02.024000 audit[6367]: CRED_ACQ pid=6367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:02.025000 audit[6367]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1c54ea10 a2=3 a3=0 items=0 ppid=1 pid=6367 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:02.025000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:02.027659 sshd-session[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:02.033043 systemd-logind[2540]: New session 21 of user core. Dec 16 13:13:02.039889 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:13:02.040000 audit[6367]: USER_START pid=6367 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:02.042000 audit[6371]: CRED_ACQ pid=6371 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:02.377435 containerd[2582]: time="2025-12-16T13:13:02.377400683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:13:02.380234 containerd[2582]: time="2025-12-16T13:13:02.380209113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:13:02.380293 containerd[2582]: time="2025-12-16T13:13:02.380279279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:13:02.380407 kubelet[3986]: E1216 13:13:02.380379 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:13:02.380636 kubelet[3986]: E1216 13:13:02.380416 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:13:02.380663 containerd[2582]: time="2025-12-16T13:13:02.380629010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:13:02.380963 kubelet[3986]: E1216 13:13:02.380918 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl6fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:13:02.460724 sshd[6371]: Connection closed by 10.200.16.10 port 59436 Dec 16 13:13:02.460794 sshd-session[6367]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:02.461000 audit[6367]: USER_END pid=6367 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:02.461000 audit[6367]: CRED_DISP pid=6367 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:02.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.11:22-10.200.16.10:59436 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Dec 16 13:13:02.465856 systemd[1]: sshd@17-10.200.8.11:22-10.200.16.10:59436.service: Deactivated successfully. Dec 16 13:13:02.468427 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:13:02.471286 systemd-logind[2540]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:13:02.473276 systemd-logind[2540]: Removed session 21. Dec 16 13:13:02.574968 systemd[1]: Started sshd@18-10.200.8.11:22-10.200.16.10:59448.service - OpenSSH per-connection server daemon (10.200.16.10:59448). Dec 16 13:13:02.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.11:22-10.200.16.10:59448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:02.669743 containerd[2582]: time="2025-12-16T13:13:02.669597904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:13:02.672630 containerd[2582]: time="2025-12-16T13:13:02.672603552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:13:02.672693 containerd[2582]: time="2025-12-16T13:13:02.672662819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:13:02.672879 kubelet[3986]: E1216 13:13:02.672842 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:13:02.672944 kubelet[3986]: E1216 13:13:02.672887 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:13:02.673409 kubelet[3986]: E1216 13:13:02.673071 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59k48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5469dcf444-55tl9_calico-system(2527a715-8ea0-4d0b-a053-0e471ff72634): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:13:02.673548 containerd[2582]: time="2025-12-16T13:13:02.673358365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:13:02.674907 kubelet[3986]: E1216 13:13:02.674880 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:13:02.929416 containerd[2582]: 
time="2025-12-16T13:13:02.929263332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:13:02.932558 containerd[2582]: time="2025-12-16T13:13:02.932466001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:13:02.932558 containerd[2582]: time="2025-12-16T13:13:02.932532045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:13:02.932819 kubelet[3986]: E1216 13:13:02.932786 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:13:02.932928 kubelet[3986]: E1216 13:13:02.932912 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:13:02.933106 kubelet[3986]: E1216 13:13:02.933055 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl6fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-tj5zh_calico-system(40ac25d7-4601-4254-b29f-0ca4ec170f77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:13:02.934573 kubelet[3986]: E1216 13:13:02.934542 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:13:02.975485 containerd[2582]: time="2025-12-16T13:13:02.975379184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:13:03.108723 sshd[6380]: Accepted publickey for core from 10.200.16.10 port 59448 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:13:03.117242 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 16 13:13:03.117297 kernel: audit: type=1101 audit(1765890783.105:874): pid=6380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.105000 audit[6380]: USER_ACCT pid=6380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.109586 sshd-session[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:03.107000 audit[6380]: CRED_ACQ pid=6380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.125200 systemd-logind[2540]: New session 22 of user core. Dec 16 13:13:03.127724 kernel: audit: type=1103 audit(1765890783.107:875): pid=6380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.132891 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 13:13:03.134731 kernel: audit: type=1006 audit(1765890783.107:876): pid=6380 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 13:13:03.107000 audit[6380]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9c888560 a2=3 a3=0 items=0 ppid=1 pid=6380 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:03.145414 kernel: audit: type=1300 audit(1765890783.107:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9c888560 a2=3 a3=0 items=0 ppid=1 pid=6380 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:03.145464 kernel: audit: type=1327 audit(1765890783.107:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:03.107000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:03.153956 kernel: audit: type=1105 audit(1765890783.144:877): pid=6380 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.144000 audit[6380]: USER_START pid=6380 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.153000 audit[6384]: CRED_ACQ pid=6384 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.163720 kernel: audit: type=1103 audit(1765890783.153:878): pid=6384 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.234916 containerd[2582]: time="2025-12-16T13:13:03.234785929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:13:03.237893 containerd[2582]: time="2025-12-16T13:13:03.237786055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:13:03.238073 containerd[2582]: time="2025-12-16T13:13:03.238000121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:13:03.238165 kubelet[3986]: E1216 13:13:03.238145 3986 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:13:03.238390 
kubelet[3986]: E1216 13:13:03.238237 3986 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:13:03.238390 kubelet[3986]: E1216 13:13:03.238354 3986 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dljlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f6997d77-z274w_calico-apiserver(f055798a-e699-42ea-8c24-f7896c6361d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:13:03.239676 kubelet[3986]: E1216 13:13:03.239651 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:13:03.480770 sshd[6384]: Connection closed by 10.200.16.10 port 59448 Dec 16 13:13:03.483838 sshd-session[6380]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:03.483000 audit[6380]: USER_END pid=6380 uid=0 auid=500 
ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.487830 systemd[1]: sshd@18-10.200.8.11:22-10.200.16.10:59448.service: Deactivated successfully. Dec 16 13:13:03.487937 systemd-logind[2540]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:13:03.492722 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:13:03.494501 kernel: audit: type=1106 audit(1765890783.483:879): pid=6380 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.495756 systemd-logind[2540]: Removed session 22. Dec 16 13:13:03.483000 audit[6380]: CRED_DISP pid=6380 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.503252 kernel: audit: type=1104 audit(1765890783.483:880): pid=6380 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:03.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.11:22-10.200.16.10:59448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:03.511719 kernel: audit: type=1131 audit(1765890783.488:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.11:22-10.200.16.10:59448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:13:04.978732 kubelet[3986]: E1216 13:13:04.977355 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:13:05.121000 audit[6396]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=6396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:13:05.121000 audit[6396]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc2f244030 a2=0 a3=7ffc2f24401c items=0 ppid=4121 pid=6396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:05.121000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:13:05.126000 audit[6396]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=6396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:13:05.126000 audit[6396]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc2f244030 a2=0 a3=7ffc2f24401c items=0 ppid=4121 pid=6396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:05.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:13:08.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.11:22-10.200.16.10:59452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:08.592549 systemd[1]: Started sshd@19-10.200.8.11:22-10.200.16.10:59452.service - OpenSSH per-connection server daemon (10.200.16.10:59452). Dec 16 13:13:08.598789 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 13:13:08.598853 kernel: audit: type=1130 audit(1765890788.591:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.11:22-10.200.16.10:59452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:13:09.138000 audit[6398]: USER_ACCT pid=6398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.143226 sshd-session[6398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:09.143765 sshd[6398]: Accepted publickey for core from 10.200.16.10 port 59452 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:13:09.141000 audit[6398]: CRED_ACQ pid=6398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.146658 kernel: audit: type=1101 audit(1765890789.138:885): pid=6398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.146881 kernel: audit: type=1103 audit(1765890789.141:886): pid=6398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.149959 kernel: audit: type=1006 audit(1765890789.141:887): pid=6398 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 13:13:09.141000 audit[6398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff656c5b00 a2=3 a3=0 items=0 ppid=1 pid=6398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:09.153811 kernel: audit: type=1300 audit(1765890789.141:887): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff656c5b00 a2=3 a3=0 items=0 ppid=1 pid=6398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:09.154937 systemd-logind[2540]: New session 23 of user core. Dec 16 13:13:09.156850 kernel: audit: type=1327 audit(1765890789.141:887): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:09.141000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:09.160992 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 13:13:09.162000 audit[6398]: USER_START pid=6398 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.168730 kernel: audit: type=1105 audit(1765890789.162:888): pid=6398 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.168777 kernel: audit: type=1103 audit(1765890789.167:889): pid=6402 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.167000 audit[6402]: CRED_ACQ pid=6402 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.486413 sshd[6402]: Connection closed by 10.200.16.10 port 59452 Dec 16 13:13:09.487835 sshd-session[6398]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:09.490000 audit[6398]: USER_END pid=6398 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.497722 kernel: audit: type=1106 audit(1765890789.490:890): pid=6398 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.498966 systemd[1]: sshd@19-10.200.8.11:22-10.200.16.10:59452.service: Deactivated successfully. Dec 16 13:13:09.490000 audit[6398]: CRED_DISP pid=6398 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.505720 kernel: audit: type=1104 audit(1765890789.490:891): pid=6398 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:09.505803 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:13:09.507147 systemd-logind[2540]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:13:09.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.11:22-10.200.16.10:59452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:09.510231 systemd-logind[2540]: Removed session 23. 
Dec 16 13:13:11.974835 kubelet[3986]: E1216 13:13:11.974790 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:13:12.979453 kubelet[3986]: E1216 13:13:12.979401 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:13:13.974772 kubelet[3986]: E1216 13:13:13.974724 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5" Dec 16 13:13:14.608500 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:13:14.608580 kernel: audit: type=1130 audit(1765890794.602:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.11:22-10.200.16.10:58662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:14.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.11:22-10.200.16.10:58662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:14.602359 systemd[1]: Started sshd@20-10.200.8.11:22-10.200.16.10:58662.service - OpenSSH per-connection server daemon (10.200.16.10:58662). 
Dec 16 13:13:14.976889 kubelet[3986]: E1216 13:13:14.976537 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634" Dec 16 13:13:15.162000 audit[6414]: USER_ACCT pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.163727 sshd[6414]: Accepted publickey for core from 10.200.16.10 port 58662 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:13:15.165264 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:15.162000 audit[6414]: CRED_ACQ pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.173886 kernel: audit: type=1101 audit(1765890795.162:894): pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.173945 kernel: audit: type=1103 audit(1765890795.162:895): pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.173591 systemd-logind[2540]: New session 24 of user core. Dec 16 13:13:15.175915 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 13:13:15.178087 kernel: audit: type=1006 audit(1765890795.162:896): pid=6414 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 13:13:15.162000 audit[6414]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef0e59b50 a2=3 a3=0 items=0 ppid=1 pid=6414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:15.162000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:15.186440 kernel: audit: type=1300 audit(1765890795.162:896): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef0e59b50 a2=3 a3=0 items=0 ppid=1 pid=6414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:15.186484 kernel: audit: type=1327 audit(1765890795.162:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:15.178000 audit[6414]: USER_START pid=6414 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.191402 kernel: audit: type=1105 audit(1765890795.178:897): pid=6414 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.178000 audit[6418]: CRED_ACQ pid=6418 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.196874 kernel: audit: type=1103 audit(1765890795.178:898): pid=6418 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.514722 sshd[6418]: Connection closed by 10.200.16.10 port 58662 Dec 16 13:13:15.515850 sshd-session[6414]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:15.516000 audit[6414]: USER_END pid=6414 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.519329 systemd-logind[2540]: Session 24 logged out. Waiting for processes to exit. Dec 16 13:13:15.519864 systemd[1]: sshd@20-10.200.8.11:22-10.200.16.10:58662.service: Deactivated successfully. Dec 16 13:13:15.522351 systemd[1]: session-24.scope: Deactivated successfully. 
Dec 16 13:13:15.525738 kernel: audit: type=1106 audit(1765890795.516:899): pid=6414 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.525355 systemd-logind[2540]: Removed session 24. Dec 16 13:13:15.516000 audit[6414]: CRED_DISP pid=6414 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.532715 kernel: audit: type=1104 audit(1765890795.516:900): pid=6414 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:15.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.11:22-10.200.16.10:58662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:15.974579 kubelet[3986]: E1216 13:13:15.974537 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77" Dec 16 13:13:16.975671 kubelet[3986]: E1216 13:13:16.975626 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12" Dec 16 13:13:20.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.11:22-10.200.16.10:45208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:20.633964 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:13:20.633996 kernel: audit: type=1130 audit(1765890800.632:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.11:22-10.200.16.10:45208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:13:20.632960 systemd[1]: Started sshd@21-10.200.8.11:22-10.200.16.10:45208.service - OpenSSH per-connection server daemon (10.200.16.10:45208). Dec 16 13:13:21.180000 audit[6433]: USER_ACCT pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.181182 sshd[6433]: Accepted publickey for core from 10.200.16.10 port 45208 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:13:21.184812 sshd-session[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:21.183000 audit[6433]: CRED_ACQ pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.188092 kernel: audit: type=1101 audit(1765890801.180:903): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.188142 kernel: audit: type=1103 audit(1765890801.183:904): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.189786 kernel: audit: type=1006 audit(1765890801.183:905): pid=6433 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 13:13:21.183000 audit[6433]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd0499590 a2=3 a3=0 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:21.196344 kernel: audit: type=1300 audit(1765890801.183:905): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd0499590 a2=3 a3=0 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:13:21.183000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:21.197840 kernel: audit: type=1327 audit(1765890801.183:905): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:13:21.197393 systemd-logind[2540]: New session 25 of user core. Dec 16 13:13:21.205900 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 13:13:21.207000 audit[6433]: USER_START pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.214000 audit[6460]: CRED_ACQ pid=6460 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.218451 kernel: audit: type=1105 audit(1765890801.207:906): pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.218493 kernel: audit: type=1103 audit(1765890801.214:907): pid=6460 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.529885 sshd[6460]: Connection closed by 10.200.16.10 port 45208 Dec 16 13:13:21.530200 sshd-session[6433]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:21.530000 audit[6433]: USER_END pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.533734 systemd[1]: sshd@21-10.200.8.11:22-10.200.16.10:45208.service: Deactivated successfully. Dec 16 13:13:21.536287 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 13:13:21.531000 audit[6433]: CRED_DISP pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.539404 systemd-logind[2540]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:13:21.540264 systemd-logind[2540]: Removed session 25. Dec 16 13:13:21.542890 kernel: audit: type=1106 audit(1765890801.530:908): pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.542935 kernel: audit: type=1104 audit(1765890801.531:909): pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:21.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.11:22-10.200.16.10:45208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:13:22.976285 kubelet[3986]: E1216 13:13:22.976249 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3" Dec 16 13:13:26.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.11:22-10.200.16.10:45222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:26.647566 systemd[1]: Started sshd@22-10.200.8.11:22-10.200.16.10:45222.service - OpenSSH per-connection server daemon (10.200.16.10:45222). Dec 16 13:13:26.654948 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:13:26.655001 kernel: audit: type=1130 audit(1765890806.645:911): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.11:22-10.200.16.10:45222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:13:26.976112 kubelet[3986]: E1216 13:13:26.975882 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fc5cb4685-dj4s5" podUID="58631ddf-c520-4b8f-9eb4-5eeeca2898ef" Dec 16 13:13:27.194000 audit[6472]: USER_ACCT pid=6472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:27.202756 kernel: audit: type=1101 audit(1765890807.194:912): pid=6472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:13:27.203580 sshd[6472]: Accepted publickey for core from 10.200.16.10 port 45222 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ Dec 16 13:13:27.205261 sshd-session[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:27.213727 kernel: audit: type=1103 audit(1765890807.202:913): pid=6472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' 
Dec 16 13:13:27.202000 audit[6472]: CRED_ACQ pid=6472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:27.218831 kernel: audit: type=1006 audit(1765890807.202:914): pid=6472 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Dec 16 13:13:27.221687 systemd-logind[2540]: New session 26 of user core.
Dec 16 13:13:27.202000 audit[6472]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc44c18c0 a2=3 a3=0 items=0 ppid=1 pid=6472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:13:27.202000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:13:27.231476 kernel: audit: type=1300 audit(1765890807.202:914): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc44c18c0 a2=3 a3=0 items=0 ppid=1 pid=6472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:13:27.231540 kernel: audit: type=1327 audit(1765890807.202:914): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:13:27.232360 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:13:27.233000 audit[6472]: USER_START pid=6472 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:27.244717 kernel: audit: type=1105 audit(1765890807.233:915): pid=6472 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:27.243000 audit[6476]: CRED_ACQ pid=6476 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:27.251722 kernel: audit: type=1103 audit(1765890807.243:916): pid=6476 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:27.613442 sshd[6476]: Connection closed by 10.200.16.10 port 45222
Dec 16 13:13:27.615369 sshd-session[6472]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:27.615000 audit[6472]: USER_END pid=6472 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:27.625730 kernel: audit: type=1106 audit(1765890807.615:917): pid=6472 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:27.625813 kernel: audit: type=1104 audit(1765890807.621:918): pid=6472 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:27.621000 audit[6472]: CRED_DISP pid=6472 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:27.631062 systemd[1]: sshd@22-10.200.8.11:22-10.200.16.10:45222.service: Deactivated successfully.
Dec 16 13:13:27.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.11:22-10.200.16.10:45222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:13:27.634791 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:13:27.636758 systemd-logind[2540]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:13:27.641105 systemd-logind[2540]: Removed session 26.
Dec 16 13:13:27.975357 kubelet[3986]: E1216 13:13:27.975089 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-z274w" podUID="f055798a-e699-42ea-8c24-f7896c6361d5"
Dec 16 13:13:27.975576 kubelet[3986]: E1216 13:13:27.975557 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f6997d77-8hwmz" podUID="02384784-68b8-42d3-aba8-a97ba2d37c12"
Dec 16 13:13:27.976632 kubelet[3986]: E1216 13:13:27.976572 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tj5zh" podUID="40ac25d7-4601-4254-b29f-0ca4ec170f77"
Dec 16 13:13:28.978720 kubelet[3986]: E1216 13:13:28.978591 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5469dcf444-55tl9" podUID="2527a715-8ea0-4d0b-a053-0e471ff72634"
Dec 16 13:13:32.727694 systemd[1]: Started sshd@23-10.200.8.11:22-10.200.16.10:43988.service - OpenSSH per-connection server daemon (10.200.16.10:43988).
Dec 16 13:13:32.733808 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 13:13:32.733880 kernel: audit: type=1130 audit(1765890812.727:920): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.11:22-10.200.16.10:43988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:13:32.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.11:22-10.200.16.10:43988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:13:33.263000 audit[6488]: USER_ACCT pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.272322 kernel: audit: type=1101 audit(1765890813.263:921): pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.272399 sshd[6488]: Accepted publickey for core from 10.200.16.10 port 43988 ssh2: RSA SHA256:KMwMjU/U2qe8v0aXrhdsRzZ/mvx4g3yqcna6qEXd5EQ
Dec 16 13:13:33.273658 sshd-session[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:33.272000 audit[6488]: CRED_ACQ pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.283531 kernel: audit: type=1103 audit(1765890813.272:922): pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.283586 kernel: audit: type=1006 audit(1765890813.272:923): pid=6488 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Dec 16 13:13:33.272000 audit[6488]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0fe066c0 a2=3 a3=0 items=0 ppid=1 pid=6488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:13:33.285441 systemd-logind[2540]: New session 27 of user core.
Dec 16 13:13:33.289433 kernel: audit: type=1300 audit(1765890813.272:923): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0fe066c0 a2=3 a3=0 items=0 ppid=1 pid=6488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:13:33.272000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:13:33.292156 kernel: audit: type=1327 audit(1765890813.272:923): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 13:13:33.292926 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 13:13:33.294000 audit[6488]: USER_START pid=6488 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.294000 audit[6492]: CRED_ACQ pid=6492 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.304530 kernel: audit: type=1105 audit(1765890813.294:924): pid=6488 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.304614 kernel: audit: type=1103 audit(1765890813.294:925): pid=6492 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.637450 sshd[6492]: Connection closed by 10.200.16.10 port 43988
Dec 16 13:13:33.638829 sshd-session[6488]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:33.639000 audit[6488]: USER_END pid=6488 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.649719 kernel: audit: type=1106 audit(1765890813.639:926): pid=6488 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.651661 systemd[1]: sshd@23-10.200.8.11:22-10.200.16.10:43988.service: Deactivated successfully.
Dec 16 13:13:33.653343 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 13:13:33.653932 systemd-logind[2540]: Session 27 logged out. Waiting for processes to exit.
Dec 16 13:13:33.639000 audit[6488]: CRED_DISP pid=6488 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.658133 systemd-logind[2540]: Removed session 27.
Dec 16 13:13:33.665724 kernel: audit: type=1104 audit(1765890813.639:927): pid=6488 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 13:13:33.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.11:22-10.200.16.10:43988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:13:34.975221 kubelet[3986]: E1216 13:13:34.974846 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7sznl" podUID="c9d340d2-955d-4739-b7cb-fc9a188575e3"
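Each SSH login in this journal appears as a pair of audit records sharing a ses= value: USER_START/CRED_ACQ when PAM opens the session and USER_END/CRED_DISP when it closes, with systemd starting and stopping a per-connection sshd@... service around them. A minimal sketch for pairing those records and recovering session durations from a log in exactly this line format follows; the script itself is illustrative and not part of the log.

# Illustrative sketch: pair USER_START/USER_END audit records by ses= and
# report how long each SSH session lasted. Only the line format shown in
# this journal is assumed.
import re
import sys
from datetime import datetime

LINE = re.compile(
    r"^(?P<month>\w{3}) (?P<day>\d+) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
    r"audit\[\d+\]: (?P<event>USER_START|USER_END) .*?\bses=(?P<ses>\d+)\b"
)

def session_durations(lines):
    """Yield (ses, seconds) for every USER_START/USER_END pair seen."""
    opened = {}
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue  # skips kernel echo lines ("kernel: audit: type=...") and everything else
        # The journal prefix carries no year; any fixed year works for time deltas.
        ts = datetime.strptime(f"2025 {m['month']} {m['day']} {m['time']}",
                               "%Y %b %d %H:%M:%S.%f")
        ses = int(m["ses"])
        if m["event"] == "USER_START":
            opened[ses] = ts
        elif ses in opened:
            yield ses, (ts - opened.pop(ses)).total_seconds()

if __name__ == "__main__":
    for ses, seconds in session_durations(sys.stdin):
        print(f"session {ses}: {seconds:.3f}s")

Fed the lines above on stdin, it would pair sessions 25, 26, and 27, each lasting roughly a third of a second between PAM session open and close.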