Jan 14 01:17:10.046962 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:26:24 -00 2026 Jan 14 01:17:10.046983 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:17:10.046993 kernel: BIOS-provided physical RAM map: Jan 14 01:17:10.046998 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 14 01:17:10.047003 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 14 01:17:10.047007 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jan 14 01:17:10.047013 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jan 14 01:17:10.047018 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jan 14 01:17:10.047023 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jan 14 01:17:10.047029 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 14 01:17:10.047034 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 14 01:17:10.047039 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 14 01:17:10.047043 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 14 01:17:10.047048 kernel: printk: legacy bootconsole [earlyser0] enabled Jan 14 01:17:10.047054 kernel: NX (Execute Disable) protection: active Jan 14 01:17:10.047061 kernel: APIC: Static calls initialized Jan 14 01:17:10.047066 kernel: efi: EFI v2.7 by Microsoft Jan 14 01:17:10.047071 kernel: efi: 
ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e99f698 RNG=0x3ffd2018 Jan 14 01:17:10.047076 kernel: random: crng init done Jan 14 01:17:10.047082 kernel: secureboot: Secure boot disabled Jan 14 01:17:10.047087 kernel: SMBIOS 3.1.0 present. Jan 14 01:17:10.047092 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Jan 14 01:17:10.047097 kernel: DMI: Memory slots populated: 2/2 Jan 14 01:17:10.047102 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 01:17:10.047107 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jan 14 01:17:10.047113 kernel: Hyper-V: Nested features: 0x3e0101 Jan 14 01:17:10.047118 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 01:17:10.047123 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 01:17:10.047128 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 01:17:10.047134 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 01:17:10.047139 kernel: tsc: Detected 2299.999 MHz processor Jan 14 01:17:10.047144 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 01:17:10.047150 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 01:17:10.047156 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jan 14 01:17:10.047163 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 01:17:10.047169 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 01:17:10.047174 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jan 14 01:17:10.047180 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jan 14 01:17:10.047185 kernel: Using GB pages for direct mapping Jan 14 01:17:10.047190 kernel: ACPI: Early table checksum verification 
disabled Jan 14 01:17:10.047200 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 01:17:10.047206 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:17:10.047211 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:17:10.047217 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 14 01:17:10.047222 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 01:17:10.047228 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:17:10.047235 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:17:10.047241 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:17:10.047246 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 14 01:17:10.047252 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 14 01:17:10.047257 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:17:10.047263 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 01:17:10.047270 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 14 01:17:10.047276 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 01:17:10.047282 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 01:17:10.047287 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 01:17:10.047293 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 01:17:10.047298 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 01:17:10.047304 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jan 14 01:17:10.047311 kernel: ACPI: Reserving BGRT table 
memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 01:17:10.047317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 14 01:17:10.047323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jan 14 01:17:10.047329 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jan 14 01:17:10.047334 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jan 14 01:17:10.047340 kernel: Zone ranges: Jan 14 01:17:10.047346 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 01:17:10.047353 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 01:17:10.047359 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 01:17:10.047364 kernel: Device empty Jan 14 01:17:10.047370 kernel: Movable zone start for each node Jan 14 01:17:10.047375 kernel: Early memory node ranges Jan 14 01:17:10.047381 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 01:17:10.047387 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jan 14 01:17:10.047394 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jan 14 01:17:10.047400 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 01:17:10.047405 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 01:17:10.047411 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 01:17:10.047416 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 01:17:10.047422 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 01:17:10.047428 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 14 01:17:10.047435 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 14 01:17:10.047441 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 01:17:10.047446 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 01:17:10.047452 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 14 01:17:10.047457 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 01:17:10.047463 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 01:17:10.047469 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 01:17:10.047474 kernel: TSC deadline timer available Jan 14 01:17:10.047481 kernel: CPU topo: Max. logical packages: 1 Jan 14 01:17:10.047487 kernel: CPU topo: Max. logical dies: 1 Jan 14 01:17:10.047492 kernel: CPU topo: Max. dies per package: 1 Jan 14 01:17:10.047498 kernel: CPU topo: Max. threads per core: 2 Jan 14 01:17:10.047504 kernel: CPU topo: Num. cores per package: 1 Jan 14 01:17:10.047509 kernel: CPU topo: Num. threads per package: 2 Jan 14 01:17:10.047515 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 14 01:17:10.047522 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 01:17:10.047528 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 01:17:10.047533 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 01:17:10.047539 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 01:17:10.047545 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 14 01:17:10.047550 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 14 01:17:10.047556 kernel: pcpu-alloc: [0] 0 1 Jan 14 01:17:10.047563 kernel: Hyper-V: PV spinlocks enabled Jan 14 01:17:10.047569 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 01:17:10.047575 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin 
verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:17:10.047581 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 01:17:10.047587 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 01:17:10.047592 kernel: Fallback order for Node 0: 0 Jan 14 01:17:10.047599 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jan 14 01:17:10.047605 kernel: Policy zone: Normal Jan 14 01:17:10.047611 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 01:17:10.047617 kernel: software IO TLB: area num 2. Jan 14 01:17:10.047622 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 01:17:10.047628 kernel: ftrace: allocating 40128 entries in 157 pages Jan 14 01:17:10.047649 kernel: ftrace: allocated 157 pages with 5 groups Jan 14 01:17:10.047658 kernel: Dynamic Preempt: voluntary Jan 14 01:17:10.047666 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 01:17:10.047673 kernel: rcu: RCU event tracing is enabled. Jan 14 01:17:10.047684 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 01:17:10.047692 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 01:17:10.047698 kernel: Rude variant of Tasks RCU enabled. Jan 14 01:17:10.047705 kernel: Tracing variant of Tasks RCU enabled. Jan 14 01:17:10.047711 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 01:17:10.047717 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 01:17:10.047723 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:17:10.047730 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:17:10.047737 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 14 01:17:10.047743 kernel: Using NULL legacy PIC Jan 14 01:17:10.047750 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 01:17:10.047757 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 01:17:10.047763 kernel: Console: colour dummy device 80x25 Jan 14 01:17:10.047770 kernel: printk: legacy console [tty1] enabled Jan 14 01:17:10.047776 kernel: printk: legacy console [ttyS0] enabled Jan 14 01:17:10.047782 kernel: printk: legacy bootconsole [earlyser0] disabled Jan 14 01:17:10.047788 kernel: ACPI: Core revision 20240827 Jan 14 01:17:10.047794 kernel: Failed to register legacy timer interrupt Jan 14 01:17:10.047802 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 01:17:10.047808 kernel: x2apic enabled Jan 14 01:17:10.047814 kernel: APIC: Switched APIC routing to: physical x2apic Jan 14 01:17:10.047820 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 14 01:17:10.047826 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 01:17:10.047832 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jan 14 01:17:10.047838 kernel: Hyper-V: Using IPI hypercalls Jan 14 01:17:10.047846 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 01:17:10.047852 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 01:17:10.047858 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 01:17:10.047864 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 01:17:10.047870 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 01:17:10.047876 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 01:17:10.047882 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Jan 14 01:17:10.047890 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4599.99 BogoMIPS (lpj=2299999) Jan 14 01:17:10.047896 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 14 01:17:10.047902 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 14 01:17:10.047908 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 14 01:17:10.047914 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 01:17:10.047920 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 01:17:10.047925 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 14 01:17:10.047932 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 14 01:17:10.047939 kernel: RETBleed: Vulnerable Jan 14 01:17:10.047945 kernel: Speculative Store Bypass: Vulnerable Jan 14 01:17:10.047950 kernel: active return thunk: its_return_thunk Jan 14 01:17:10.047956 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 14 01:17:10.047962 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 01:17:10.047968 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 01:17:10.047973 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 01:17:10.047980 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 01:17:10.047985 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 01:17:10.047991 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 01:17:10.047998 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jan 14 01:17:10.048004 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jan 14 01:17:10.048010 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jan 14 01:17:10.048016 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 01:17:10.048022 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 01:17:10.048028 
kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 01:17:10.048034 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 01:17:10.048039 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jan 14 01:17:10.048045 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jan 14 01:17:10.048051 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jan 14 01:17:10.048057 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jan 14 01:17:10.048064 kernel: Freeing SMP alternatives memory: 32K Jan 14 01:17:10.048070 kernel: pid_max: default: 32768 minimum: 301 Jan 14 01:17:10.048076 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 01:17:10.048081 kernel: landlock: Up and running. Jan 14 01:17:10.048087 kernel: SELinux: Initializing. Jan 14 01:17:10.048093 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:17:10.048099 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:17:10.048105 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jan 14 01:17:10.048111 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jan 14 01:17:10.048117 kernel: signal: max sigframe size: 11952 Jan 14 01:17:10.048124 kernel: rcu: Hierarchical SRCU implementation. Jan 14 01:17:10.048130 kernel: rcu: Max phase no-delay instances is 400. Jan 14 01:17:10.048137 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 01:17:10.048143 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 01:17:10.048149 kernel: smp: Bringing up secondary CPUs ... Jan 14 01:17:10.048155 kernel: smpboot: x86: Booting SMP configuration: Jan 14 01:17:10.048161 kernel: .... 
node #0, CPUs: #1 Jan 14 01:17:10.048168 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 01:17:10.048174 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 14 01:17:10.048181 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 283604K reserved, 0K cma-reserved) Jan 14 01:17:10.048187 kernel: devtmpfs: initialized Jan 14 01:17:10.048193 kernel: x86/mm: Memory block size: 128MB Jan 14 01:17:10.048199 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 01:17:10.048205 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 01:17:10.048213 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 01:17:10.048219 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 01:17:10.048225 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 01:17:10.048231 kernel: audit: initializing netlink subsys (disabled) Jan 14 01:17:10.048237 kernel: audit: type=2000 audit(1768353424.102:1): state=initialized audit_enabled=0 res=1 Jan 14 01:17:10.048243 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 01:17:10.048249 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 01:17:10.048255 kernel: cpuidle: using governor menu Jan 14 01:17:10.048263 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 01:17:10.048269 kernel: dca service started, version 1.12.1 Jan 14 01:17:10.048275 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jan 14 01:17:10.048281 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 14 01:17:10.048287 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 01:17:10.048293 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 01:17:10.048300 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 01:17:10.048306 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 01:17:10.048313 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 01:17:10.048318 kernel: ACPI: Added _OSI(Module Device) Jan 14 01:17:10.048325 kernel: ACPI: Added _OSI(Processor Device) Jan 14 01:17:10.048331 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 01:17:10.048337 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 01:17:10.048343 kernel: ACPI: Interpreter enabled Jan 14 01:17:10.048350 kernel: ACPI: PM: (supports S0 S5) Jan 14 01:17:10.048356 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 01:17:10.048362 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 01:17:10.048368 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 01:17:10.048374 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 01:17:10.048380 kernel: iommu: Default domain type: Translated Jan 14 01:17:10.048386 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 01:17:10.048394 kernel: efivars: Registered efivars operations Jan 14 01:17:10.048400 kernel: PCI: Using ACPI for IRQ routing Jan 14 01:17:10.048406 kernel: PCI: System does not support PCI Jan 14 01:17:10.048412 kernel: vgaarb: loaded Jan 14 01:17:10.048418 kernel: clocksource: Switched to clocksource tsc-early Jan 14 01:17:10.048424 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 01:17:10.048430 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 01:17:10.048437 kernel: pnp: PnP ACPI init Jan 14 01:17:10.048444 kernel: pnp: PnP ACPI: found 3 devices Jan 14 01:17:10.048450 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 
01:17:10.048456 kernel: NET: Registered PF_INET protocol family Jan 14 01:17:10.048462 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 01:17:10.048468 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 01:17:10.048475 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 01:17:10.048482 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 01:17:10.048489 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 01:17:10.048495 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 01:17:10.048501 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 01:17:10.048507 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 01:17:10.048513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 01:17:10.048519 kernel: NET: Registered PF_XDP protocol family Jan 14 01:17:10.048527 kernel: PCI: CLS 0 bytes, default 64 Jan 14 01:17:10.048533 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 01:17:10.048540 kernel: software IO TLB: mapped [mem 0x000000003a99f000-0x000000003e99f000] (64MB) Jan 14 01:17:10.048546 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jan 14 01:17:10.048552 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jan 14 01:17:10.048558 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Jan 14 01:17:10.048564 kernel: clocksource: Switched to clocksource tsc Jan 14 01:17:10.048571 kernel: Initialise system trusted keyrings Jan 14 01:17:10.048577 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 01:17:10.048583 kernel: Key type asymmetric registered Jan 14 01:17:10.048590 kernel: Asymmetric key parser 'x509' registered Jan 14 01:17:10.048596 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Jan 14 01:17:10.048602 kernel: io scheduler mq-deadline registered Jan 14 01:17:10.048608 kernel: io scheduler kyber registered Jan 14 01:17:10.048615 kernel: io scheduler bfq registered Jan 14 01:17:10.048622 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 01:17:10.048628 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 01:17:10.048647 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:17:10.048654 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 01:17:10.048660 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:17:10.048666 kernel: i8042: PNP: No PS/2 controller found. Jan 14 01:17:10.048799 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 01:17:10.048876 kernel: rtc_cmos 00:02: setting system clock to 2026-01-14T01:17:06 UTC (1768353426) Jan 14 01:17:10.048949 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 01:17:10.048956 kernel: intel_pstate: Intel P-state driver initializing Jan 14 01:17:10.048963 kernel: efifb: probing for efifb Jan 14 01:17:10.048969 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 01:17:10.048977 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 01:17:10.048983 kernel: efifb: scrolling: redraw Jan 14 01:17:10.048989 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 01:17:10.048995 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 01:17:10.049001 kernel: fb0: EFI VGA frame buffer device Jan 14 01:17:10.049007 kernel: pstore: Using crash dump compression: deflate Jan 14 01:17:10.049013 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 01:17:10.049021 kernel: NET: Registered PF_INET6 protocol family Jan 14 01:17:10.049027 kernel: Segment Routing with IPv6 Jan 14 01:17:10.049034 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 
01:17:10.049040 kernel: NET: Registered PF_PACKET protocol family Jan 14 01:17:10.049046 kernel: Key type dns_resolver registered Jan 14 01:17:10.049053 kernel: IPI shorthand broadcast: enabled Jan 14 01:17:10.049059 kernel: sched_clock: Marking stable (2070005052, 102628559)->(2500974690, -328341079) Jan 14 01:17:10.049065 kernel: registered taskstats version 1 Jan 14 01:17:10.049073 kernel: Loading compiled-in X.509 certificates Jan 14 01:17:10.049080 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e43fcdb17feb86efe6ca4b76910b93467fb95f4f' Jan 14 01:17:10.049086 kernel: Demotion targets for Node 0: null Jan 14 01:17:10.049092 kernel: Key type .fscrypt registered Jan 14 01:17:10.049098 kernel: Key type fscrypt-provisioning registered Jan 14 01:17:10.049104 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 01:17:10.049111 kernel: ima: Allocated hash algorithm: sha1 Jan 14 01:17:10.049118 kernel: ima: No architecture policies found Jan 14 01:17:10.049124 kernel: clk: Disabling unused clocks Jan 14 01:17:10.049130 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 01:17:10.049136 kernel: Write protecting the kernel read-only data: 47104k Jan 14 01:17:10.049142 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 01:17:10.049148 kernel: Run /init as init process Jan 14 01:17:10.049154 kernel: with arguments: Jan 14 01:17:10.049162 kernel: /init Jan 14 01:17:10.049168 kernel: with environment: Jan 14 01:17:10.049173 kernel: HOME=/ Jan 14 01:17:10.049179 kernel: TERM=linux Jan 14 01:17:10.049185 kernel: hv_vmbus: Vmbus version:5.3 Jan 14 01:17:10.049192 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 01:17:10.049198 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 01:17:10.049204 kernel: PTP clock support registered Jan 14 01:17:10.049211 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 01:17:10.049217 kernel: hv_vmbus: registering driver hv_utils Jan 14 01:17:10.049223 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 01:17:10.049229 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 01:17:10.049235 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 01:17:10.049241 kernel: SCSI subsystem initialized Jan 14 01:17:10.049247 kernel: hv_vmbus: registering driver hv_pci Jan 14 01:17:10.049350 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 14 01:17:10.049433 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 14 01:17:10.049526 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 14 01:17:10.049606 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 01:17:10.049730 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 14 01:17:10.049823 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 14 01:17:10.049906 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 01:17:10.049993 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 14 01:17:10.050001 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 01:17:10.050094 kernel: scsi host0: storvsc_host_t Jan 14 01:17:10.050193 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 14 01:17:10.050201 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 01:17:10.050208 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 01:17:10.050214 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 14 01:17:10.050301 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] 
on Jan 14 01:17:10.050309 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 01:17:10.050317 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 14 01:17:10.050395 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 14 01:17:10.050486 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 14 01:17:10.050551 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 14 01:17:10.050559 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 14 01:17:10.050674 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 01:17:10.050684 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 01:17:10.050772 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 01:17:10.050779 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 01:17:10.050786 kernel: device-mapper: uevent: version 1.0.3 Jan 14 01:17:10.050793 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 01:17:10.050799 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 01:17:10.050817 kernel: raid6: avx512x4 gen() 44528 MB/s Jan 14 01:17:10.050825 kernel: raid6: avx512x2 gen() 43743 MB/s Jan 14 01:17:10.050832 kernel: raid6: avx512x1 gen() 25196 MB/s Jan 14 01:17:10.050838 kernel: raid6: avx2x4 gen() 34267 MB/s Jan 14 01:17:10.050845 kernel: raid6: avx2x2 gen() 36446 MB/s Jan 14 01:17:10.050851 kernel: raid6: avx2x1 gen() 31207 MB/s Jan 14 01:17:10.050857 kernel: raid6: using algorithm avx512x4 gen() 44528 MB/s Jan 14 01:17:10.050865 kernel: raid6: .... 
xor() 7341 MB/s, rmw enabled Jan 14 01:17:10.050871 kernel: raid6: using avx512x2 recovery algorithm Jan 14 01:17:10.050878 kernel: xor: automatically using best checksumming function avx Jan 14 01:17:10.050884 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 01:17:10.050891 kernel: BTRFS: device fsid cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (844) Jan 14 01:17:10.050897 kernel: BTRFS info (device dm-0): first mount of filesystem cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 Jan 14 01:17:10.050904 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:17:10.050912 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 14 01:17:10.050919 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 01:17:10.050925 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 01:17:10.050931 kernel: loop: module loaded Jan 14 01:17:10.050938 kernel: loop0: detected capacity change from 0 to 100544 Jan 14 01:17:10.050944 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 01:17:10.050952 systemd[1]: Successfully made /usr/ read-only. Jan 14 01:17:10.050962 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:17:10.050970 systemd[1]: Detected virtualization microsoft. Jan 14 01:17:10.050977 systemd[1]: Detected architecture x86-64. Jan 14 01:17:10.050983 systemd[1]: Running in initrd. Jan 14 01:17:10.050989 systemd[1]: No hostname configured, using default hostname. Jan 14 01:17:10.050998 systemd[1]: Hostname set to . Jan 14 01:17:10.051006 systemd[1]: Initializing machine ID from random generator. 
Jan 14 01:17:10.051012 systemd[1]: Queued start job for default target initrd.target.
Jan 14 01:17:10.051019 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 01:17:10.051026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:17:10.051033 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:17:10.051040 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 01:17:10.051049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 01:17:10.051056 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 01:17:10.051063 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 01:17:10.051071 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:17:10.051078 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:17:10.051084 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 01:17:10.051093 systemd[1]: Reached target paths.target - Path Units.
Jan 14 01:17:10.051099 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 01:17:10.051106 systemd[1]: Reached target swap.target - Swaps.
Jan 14 01:17:10.051113 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 01:17:10.051121 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 01:17:10.051128 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 01:17:10.051134 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:17:10.051141 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 01:17:10.051148 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 01:17:10.051154 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 01:17:10.051161 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 01:17:10.051169 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 01:17:10.051176 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 01:17:10.051183 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 01:17:10.051190 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 01:17:10.051197 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 01:17:10.051203 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 01:17:10.051210 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 01:17:10.051218 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 01:17:10.051225 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 01:17:10.051232 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 01:17:10.051239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:17:10.051247 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 01:17:10.051254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 01:17:10.051261 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 01:17:10.051267 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 01:17:10.051285 systemd-journald[978]: Collecting audit messages is enabled.
Jan 14 01:17:10.051303 systemd-journald[978]: Journal started
Jan 14 01:17:10.051320 systemd-journald[978]: Runtime Journal (/run/log/journal/4d11a0422f1f40e9ae0f8f6c47ecffce) is 8M, max 158.5M, 150.5M free.
Jan 14 01:17:10.053195 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 01:17:10.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.057907 kernel: audit: type=1130 audit(1768353430.051:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.057077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 01:17:10.162201 systemd-tmpfiles[991]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 01:17:10.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.166944 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 01:17:10.175444 kernel: audit: type=1130 audit(1768353430.166:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.178716 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 01:17:10.191698 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 01:17:10.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.212234 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 01:17:10.216655 kernel: audit: type=1130 audit(1768353430.211:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.218120 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 01:17:10.223652 kernel: audit: type=1130 audit(1768353430.217:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.227049 systemd-modules-load[984]: Inserted module 'br_netfilter'
Jan 14 01:17:10.232513 kernel: Bridge firewalling registered
Jan 14 01:17:10.232534 kernel: audit: type=1130 audit(1768353430.228:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.227999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 01:17:10.234775 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 01:17:10.260709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:17:10.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.266754 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 01:17:10.272347 kernel: audit: type=1130 audit(1768353430.263:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.281835 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 01:17:10.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.287759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 01:17:10.285000 audit: BPF prog-id=6 op=LOAD
Jan 14 01:17:10.292228 kernel: audit: type=1130 audit(1768353430.284:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.292250 kernel: audit: type=1334 audit(1768353430.285:9): prog-id=6 op=LOAD
Jan 14 01:17:10.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.309841 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 01:17:10.318356 kernel: audit: type=1130 audit(1768353430.311:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.317036 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 01:17:10.394618 systemd-resolved[1009]: Positive Trust Anchors:
Jan 14 01:17:10.394663 systemd-resolved[1009]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 01:17:10.394668 systemd-resolved[1009]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 01:17:10.394705 systemd-resolved[1009]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 01:17:10.434561 dracut-cmdline[1021]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260
Jan 14 01:17:10.463734 systemd-resolved[1009]: Defaulting to hostname 'linux'.
Jan 14 01:17:10.472975 kernel: audit: type=1130 audit(1768353430.464:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:10.464644 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 01:17:10.465522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:17:10.691667 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 01:17:10.881663 kernel: iscsi: registered transport (tcp)
Jan 14 01:17:10.946002 kernel: iscsi: registered transport (qla4xxx)
Jan 14 01:17:10.946072 kernel: QLogic iSCSI HBA Driver
Jan 14 01:17:11.010094 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 01:17:11.030386 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 01:17:11.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.038182 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 01:17:11.041651 kernel: audit: type=1130 audit(1768353431.034:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.073578 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 01:17:11.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.081041 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 01:17:11.085756 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 01:17:11.114930 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 01:17:11.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.119000 audit: BPF prog-id=7 op=LOAD
Jan 14 01:17:11.120000 audit: BPF prog-id=8 op=LOAD
Jan 14 01:17:11.121352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 01:17:11.152335 systemd-udevd[1260]: Using default interface naming scheme 'v257'.
Jan 14 01:17:11.165832 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 01:17:11.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.173147 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 01:17:11.188130 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 01:17:11.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.192000 audit: BPF prog-id=9 op=LOAD
Jan 14 01:17:11.194836 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 01:17:11.202585 dracut-pre-trigger[1346]: rd.md=0: removing MD RAID activation
Jan 14 01:17:11.227045 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 01:17:11.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.238865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 01:17:11.251799 systemd-networkd[1360]: lo: Link UP
Jan 14 01:17:11.252012 systemd-networkd[1360]: lo: Gained carrier
Jan 14 01:17:11.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.252468 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 01:17:11.256779 systemd[1]: Reached target network.target - Network.
Jan 14 01:17:11.290630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 01:17:11.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.297057 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 01:17:11.380164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 01:17:11.381610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:17:11.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:11.385533 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:17:11.389324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:17:12.220662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#228 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 14 01:17:12.239821 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 01:17:12.249498 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a7f3a87 (unnamed net_device) (uninitialized): VF slot 1 added
Jan 14 01:17:12.255136 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:17:12.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:12.262140 kernel: nvme nvme0: using unchecked data buffer
Jan 14 01:17:12.262316 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 01:17:12.292598 systemd-networkd[1360]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:17:12.292842 systemd-networkd[1360]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 01:17:12.293524 systemd-networkd[1360]: eth0: Link UP
Jan 14 01:17:12.293696 systemd-networkd[1360]: eth0: Gained carrier
Jan 14 01:17:12.293708 systemd-networkd[1360]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:17:12.311677 systemd-networkd[1360]: eth0: DHCPv4 address 10.200.4.14/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 01:17:12.327709 kernel: AES CTR mode by8 optimization enabled
Jan 14 01:17:12.496120 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Jan 14 01:17:12.557097 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Jan 14 01:17:13.202532 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Jan 14 01:17:13.206785 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 01:17:13.281661 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Jan 14 01:17:13.281910 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Jan 14 01:17:13.287658 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Jan 14 01:17:13.287879 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 01:17:13.291688 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Jan 14 01:17:13.295767 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Jan 14 01:17:13.301661 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Jan 14 01:17:13.303928 kernel: pci 7870:00:00.0: enabling Extended Tags
Jan 14 01:17:13.319597 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 01:17:13.319797 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Jan 14 01:17:13.323799 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Jan 14 01:17:13.371394 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Jan 14 01:17:13.383686 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Jan 14 01:17:13.383921 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a7f3a87 eth0: VF registering: eth1
Jan 14 01:17:13.385873 kernel: mana 7870:00:00.0 eth1: joined to eth0
Jan 14 01:17:13.390249 systemd-networkd[1360]: eth1: Interface name change detected, renamed to enP30832s1.
Jan 14 01:17:13.392563 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Jan 14 01:17:13.489668 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jan 14 01:17:13.498622 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jan 14 01:17:13.498858 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a7f3a87 eth0: Data path switched to VF: enP30832s1
Jan 14 01:17:13.499041 systemd-networkd[1360]: enP30832s1: Link UP
Jan 14 01:17:13.499191 systemd-networkd[1360]: enP30832s1: Gained carrier
Jan 14 01:17:13.574816 systemd-networkd[1360]: eth0: Gained IPv6LL
Jan 14 01:17:13.656291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jan 14 01:17:13.767237 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 01:17:13.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:13.767802 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:17:13.774701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:17:13.778391 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 01:17:13.782953 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 01:17:13.802208 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 01:17:13.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:14.505686 disk-uuid[1542]: Warning: The kernel is still using the old partition table.
Jan 14 01:17:14.505686 disk-uuid[1542]: The new table will be used at the next reboot or after you
Jan 14 01:17:14.505686 disk-uuid[1542]: run partprobe(8) or kpartx(8)
Jan 14 01:17:14.505686 disk-uuid[1542]: The operation has completed successfully.
Jan 14 01:17:14.514651 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 01:17:14.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:14.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:14.514788 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 01:17:14.520306 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 01:17:14.610948 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1584)
Jan 14 01:17:14.610988 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:17:14.612309 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:17:14.634983 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 14 01:17:14.635024 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 14 01:17:14.635037 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 14 01:17:14.641658 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:17:14.642139 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 01:17:14.646184 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 01:17:14.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.012862 ignition[1603]: Ignition 2.24.0
Jan 14 01:17:16.012875 ignition[1603]: Stage: fetch-offline
Jan 14 01:17:16.024140 kernel: kauditd_printk_skb: 17 callbacks suppressed
Jan 14 01:17:16.024167 kernel: audit: type=1130 audit(1768353436.017:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.015651 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:17:16.013127 ignition[1603]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:17:16.020975 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 01:17:16.013137 ignition[1603]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 01:17:16.013225 ignition[1603]: parsed url from cmdline: ""
Jan 14 01:17:16.013228 ignition[1603]: no config URL provided
Jan 14 01:17:16.013233 ignition[1603]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 01:17:16.013239 ignition[1603]: no config at "/usr/lib/ignition/user.ign"
Jan 14 01:17:16.013244 ignition[1603]: failed to fetch config: resource requires networking
Jan 14 01:17:16.014434 ignition[1603]: Ignition finished successfully
Jan 14 01:17:16.048305 ignition[1609]: Ignition 2.24.0
Jan 14 01:17:16.048315 ignition[1609]: Stage: fetch
Jan 14 01:17:16.048564 ignition[1609]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:17:16.048572 ignition[1609]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 01:17:16.048685 ignition[1609]: parsed url from cmdline: ""
Jan 14 01:17:16.048688 ignition[1609]: no config URL provided
Jan 14 01:17:16.048694 ignition[1609]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 01:17:16.048700 ignition[1609]: no config at "/usr/lib/ignition/user.ign"
Jan 14 01:17:16.048722 ignition[1609]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 01:17:16.150685 ignition[1609]: GET result: OK
Jan 14 01:17:16.150779 ignition[1609]: config has been read from IMDS userdata
Jan 14 01:17:16.150807 ignition[1609]: parsing config with SHA512: a6af70cde41e78dbd37c6f70441840391d220983f48205ec5f1be945731972c14f27cfe2b4d927896015d399a9c9a1733138c5e75cd21566c415cc9fbbd4283e
Jan 14 01:17:16.157404 unknown[1609]: fetched base config from "system"
Jan 14 01:17:16.157412 unknown[1609]: fetched base config from "system"
Jan 14 01:17:16.157836 ignition[1609]: fetch: fetch complete
Jan 14 01:17:16.157417 unknown[1609]: fetched user config from "azure"
Jan 14 01:17:16.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.157840 ignition[1609]: fetch: fetch passed
Jan 14 01:17:16.172708 kernel: audit: type=1130 audit(1768353436.163:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.160532 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 01:17:16.157876 ignition[1609]: Ignition finished successfully
Jan 14 01:17:16.168887 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 01:17:16.199928 ignition[1615]: Ignition 2.24.0
Jan 14 01:17:16.199939 ignition[1615]: Stage: kargs
Jan 14 01:17:16.200147 ignition[1615]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:17:16.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.202504 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 01:17:16.212728 kernel: audit: type=1130 audit(1768353436.206:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.200155 ignition[1615]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 01:17:16.210205 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 01:17:16.200952 ignition[1615]: kargs: kargs passed
Jan 14 01:17:16.200984 ignition[1615]: Ignition finished successfully
Jan 14 01:17:16.232265 ignition[1621]: Ignition 2.24.0
Jan 14 01:17:16.232275 ignition[1621]: Stage: disks
Jan 14 01:17:16.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.234613 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 01:17:16.244791 kernel: audit: type=1130 audit(1768353436.237:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.232482 ignition[1621]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:17:16.237927 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 01:17:16.232490 ignition[1621]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 01:17:16.244818 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 01:17:16.233209 ignition[1621]: disks: disks passed
Jan 14 01:17:16.233240 ignition[1621]: Ignition finished successfully
Jan 14 01:17:16.253140 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 01:17:16.255660 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 01:17:16.259681 systemd[1]: Reached target basic.target - Basic System.
Jan 14 01:17:16.261943 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 01:17:16.350170 systemd-fsck[1629]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks
Jan 14 01:17:16.355052 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 01:17:16.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.361601 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 01:17:16.368809 kernel: audit: type=1130 audit(1768353436.360:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:17:16.699659 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9c98b0a3-27fc-41c4-a169-349b38bd9ceb r/w with ordered data mode. Quota mode: none.
Jan 14 01:17:16.700195 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 01:17:16.706095 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 01:17:16.743042 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:17:16.748725 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 01:17:16.750753 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 01:17:16.751023 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 01:17:16.751312 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:17:16.774299 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 01:17:16.777745 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 01:17:16.783660 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1638) Jan 14 01:17:16.787082 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:17:16.787169 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:17:16.811830 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 01:17:16.811870 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 14 01:17:16.813220 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 01:17:16.814709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 01:17:17.414230 coreos-metadata[1640]: Jan 14 01:17:17.414 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 01:17:17.416795 coreos-metadata[1640]: Jan 14 01:17:17.416 INFO Fetch successful Jan 14 01:17:17.419774 coreos-metadata[1640]: Jan 14 01:17:17.418 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 01:17:17.429225 coreos-metadata[1640]: Jan 14 01:17:17.429 INFO Fetch successful Jan 14 01:17:17.445568 coreos-metadata[1640]: Jan 14 01:17:17.445 INFO wrote hostname ci-4578.0.0-p-dbef80f9ad to /sysroot/etc/hostname Jan 14 01:17:17.449211 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 01:17:17.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:17.456659 kernel: audit: type=1130 audit(1768353437.450:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:18.775685 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 01:17:18.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:18.782352 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 01:17:18.783895 kernel: audit: type=1130 audit(1768353438.775:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:18.787440 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 01:17:18.826962 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 01:17:18.832700 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:17:18.847840 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 01:17:18.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:18.855713 kernel: audit: type=1130 audit(1768353438.849:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:18.863524 ignition[1743]: INFO : Ignition 2.24.0 Jan 14 01:17:18.863524 ignition[1743]: INFO : Stage: mount Jan 14 01:17:18.873731 kernel: audit: type=1130 audit(1768353438.868:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:18.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:18.873806 ignition[1743]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:17:18.873806 ignition[1743]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:17:18.873806 ignition[1743]: INFO : mount: mount passed Jan 14 01:17:18.873806 ignition[1743]: INFO : Ignition finished successfully Jan 14 01:17:18.866126 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 01:17:18.871847 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 01:17:18.910095 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 01:17:18.932276 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1752) Jan 14 01:17:18.932374 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:17:18.933726 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:17:18.941540 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 01:17:18.941576 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 14 01:17:18.943092 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 01:17:18.945075 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 01:17:18.968414 ignition[1769]: INFO : Ignition 2.24.0 Jan 14 01:17:18.968414 ignition[1769]: INFO : Stage: files Jan 14 01:17:18.973693 ignition[1769]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:17:18.973693 ignition[1769]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:17:18.973693 ignition[1769]: DEBUG : files: compiled without relabeling support, skipping Jan 14 01:17:18.985696 ignition[1769]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 01:17:18.985696 ignition[1769]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 01:17:19.052490 ignition[1769]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 01:17:19.055492 ignition[1769]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 01:17:19.057913 unknown[1769]: wrote ssh authorized keys file for user: core Jan 14 01:17:19.059037 ignition[1769]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 01:17:19.092142 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 14 01:17:19.096716 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 14 01:17:19.129145 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 01:17:19.184316 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 14 01:17:19.188706 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 01:17:19.188706 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 14 01:17:19.188706 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:17:19.188706 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:17:19.188706 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:17:19.188706 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:17:19.188706 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:17:19.188706 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:17:19.218993 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:17:19.218993 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:17:19.218993 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 01:17:19.218993 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 01:17:19.218993 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 01:17:19.218993 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 14 01:17:19.508608 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 01:17:19.662172 ignition[1769]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 01:17:19.662172 ignition[1769]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 01:17:19.713349 ignition[1769]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:17:19.722715 ignition[1769]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:17:19.722715 ignition[1769]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 01:17:19.722715 ignition[1769]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 01:17:19.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:19.737386 ignition[1769]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 01:17:19.737386 ignition[1769]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:17:19.737386 ignition[1769]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:17:19.737386 ignition[1769]: INFO : files: files passed Jan 14 01:17:19.737386 ignition[1769]: INFO : Ignition finished successfully Jan 14 01:17:19.750928 kernel: audit: type=1130 audit(1768353439.732:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.728874 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 01:17:19.735769 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 01:17:19.741333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 01:17:19.759050 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 01:17:19.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.759145 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 14 01:17:19.770783 initrd-setup-root-after-ignition[1800]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:17:19.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.780328 initrd-setup-root-after-ignition[1804]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:17:19.774004 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:17:19.785686 initrd-setup-root-after-ignition[1800]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:17:19.774211 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 01:17:19.774970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 01:17:19.826022 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 01:17:19.827315 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 01:17:19.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.832405 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 01:17:19.836421 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 01:17:19.838821 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Jan 14 01:17:19.839526 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 01:17:19.865672 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:17:19.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.870179 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 01:17:19.886815 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:17:19.886958 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:17:19.890368 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:17:19.894511 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 01:17:19.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.899132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 01:17:19.899237 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:17:19.905194 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 01:17:19.910079 systemd[1]: Stopped target basic.target - Basic System. Jan 14 01:17:19.912985 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 01:17:19.917562 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:17:19.919868 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 01:17:19.928420 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:17:19.929269 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 14 01:17:19.929569 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:17:19.934364 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 01:17:19.939178 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 01:17:19.944812 systemd[1]: Stopped target swap.target - Swaps. Jan 14 01:17:19.948735 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 01:17:19.948882 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:17:19.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.952085 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:17:19.956809 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:17:19.961020 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 01:17:19.961153 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:17:19.966005 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 01:17:19.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.966115 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 01:17:19.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.967937 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 14 01:17:19.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.968068 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:17:19.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.970582 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 01:17:19.970714 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 01:17:19.975755 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 01:17:19.975888 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 01:17:19.980227 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 01:17:19.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.985690 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 01:17:19.987266 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:17:20.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:19.997150 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 14 01:17:20.011953 ignition[1824]: INFO : Ignition 2.24.0 Jan 14 01:17:20.011953 ignition[1824]: INFO : Stage: umount Jan 14 01:17:20.011953 ignition[1824]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:17:20.011953 ignition[1824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:17:20.011953 ignition[1824]: INFO : umount: umount passed Jan 14 01:17:20.011953 ignition[1824]: INFO : Ignition finished successfully Jan 14 01:17:20.001900 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 01:17:20.002487 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:17:20.007865 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 01:17:20.007984 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:17:20.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.015277 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 01:17:20.015408 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:17:20.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.033497 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 01:17:20.033594 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 01:17:20.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.049557 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 14 01:17:20.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.049757 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 01:17:20.054971 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 01:17:20.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.055025 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 01:17:20.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.060119 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 01:17:20.060162 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 01:17:20.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.062349 systemd[1]: Stopped target network.target - Network. Jan 14 01:17:20.068167 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 01:17:20.068214 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:17:20.071541 systemd[1]: Stopped target paths.target - Path Units. Jan 14 01:17:20.079703 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 01:17:20.083673 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:17:20.085591 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 14 01:17:20.090703 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 01:17:20.093318 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 01:17:20.093357 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:17:20.095631 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 01:17:20.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.095670 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:17:20.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.097047 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 01:17:20.097068 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:17:20.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.100695 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 01:17:20.100738 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 01:17:20.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.104718 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 14 01:17:20.129000 audit: BPF prog-id=9 op=UNLOAD Jan 14 01:17:20.104758 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 01:17:20.109802 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 01:17:20.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.113719 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 01:17:20.118168 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 01:17:20.118249 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 01:17:20.122290 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 01:17:20.122391 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 01:17:20.129100 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 01:17:20.129274 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 01:17:20.129302 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:17:20.130410 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 01:17:20.136918 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 01:17:20.136979 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:17:20.142785 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:17:20.145611 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 01:17:20.146106 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 01:17:20.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:17:20.162000 audit: BPF prog-id=6 op=UNLOAD Jan 14 01:17:20.146199 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 01:17:20.157561 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 01:17:20.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.157880 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:17:20.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.175787 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 01:17:20.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.175838 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 01:17:20.190704 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 01:17:20.190804 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:17:20.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.195782 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 01:17:20.195857 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 01:17:20.199771 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 01:17:20.199808 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 14 01:17:20.201607 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 01:17:20.201688 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:17:20.215006 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 01:17:20.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.215059 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 01:17:20.218998 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 01:17:20.220667 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:17:20.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.225934 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 01:17:20.232155 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a7f3a87 eth0: Data path switched from VF: enP30832s1 Jan 14 01:17:20.234334 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 14 01:17:20.234567 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 01:17:20.235810 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:17:20.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.241108 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 01:17:20.241167 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 14 01:17:20.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.247746 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:17:20.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.247793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:17:20.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.252311 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 01:17:20.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.252392 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 01:17:20.255908 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 01:17:20.255985 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:17:20.873477 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 01:17:20.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:20.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:20.873588 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 01:17:20.875363 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:17:20.875402 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 01:17:20.875449 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 01:17:20.876600 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:17:20.894225 systemd[1]: Switching root. Jan 14 01:17:20.968907 systemd-journald[978]: Journal stopped Jan 14 01:17:31.377464 systemd-journald[978]: Received SIGTERM from PID 1 (systemd). Jan 14 01:17:31.381387 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:17:31.381413 kernel: SELinux: policy capability open_perms=1 Jan 14 01:17:31.381426 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:17:31.381437 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:17:31.381448 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:17:31.381460 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:17:31.381473 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:17:31.381488 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:17:31.381501 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:17:31.381513 kernel: kauditd_printk_skb: 45 callbacks suppressed Jan 14 01:17:31.381526 kernel: audit: type=1403 audit(1768353447.963:85): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 14 01:17:31.381541 systemd[1]: Successfully loaded SELinux policy in 190.264ms. Jan 14 01:17:31.381554 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.363ms. 
Jan 14 01:17:31.381573 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:17:31.381590 systemd[1]: Detected virtualization microsoft. Jan 14 01:17:31.381603 systemd[1]: Detected architecture x86-64. Jan 14 01:17:31.381620 systemd[1]: Detected first boot. Jan 14 01:17:31.381648 systemd[1]: Hostname set to . Jan 14 01:17:31.381661 systemd[1]: Initializing machine ID from random generator. Jan 14 01:17:31.381672 kernel: audit: type=1334 audit(1768353448.660:86): prog-id=10 op=LOAD Jan 14 01:17:31.381683 kernel: audit: type=1334 audit(1768353448.661:87): prog-id=10 op=UNLOAD Jan 14 01:17:31.381693 kernel: audit: type=1334 audit(1768353448.661:88): prog-id=11 op=LOAD Jan 14 01:17:31.381705 kernel: audit: type=1334 audit(1768353448.661:89): prog-id=11 op=UNLOAD Jan 14 01:17:31.381715 zram_generator::config[1867]: No configuration found. Jan 14 01:17:31.381728 kernel: Guest personality initialized and is inactive Jan 14 01:17:31.381740 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 14 01:17:31.381750 kernel: Initialized host personality Jan 14 01:17:31.381760 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:17:31.381771 systemd[1]: Populated /etc with preset unit settings. Jan 14 01:17:31.381783 kernel: audit: type=1334 audit(1768353450.934:90): prog-id=12 op=LOAD Jan 14 01:17:31.381794 kernel: audit: type=1334 audit(1768353450.934:91): prog-id=3 op=UNLOAD Jan 14 01:17:31.381805 kernel: audit: type=1334 audit(1768353450.934:92): prog-id=13 op=LOAD Jan 14 01:17:31.381816 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jan 14 01:17:31.381827 kernel: audit: type=1334 audit(1768353450.934:93): prog-id=14 op=LOAD Jan 14 01:17:31.381837 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 01:17:31.381848 kernel: audit: type=1334 audit(1768353450.934:94): prog-id=4 op=UNLOAD Jan 14 01:17:31.381865 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 01:17:31.381879 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:17:31.381891 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 01:17:31.381905 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:17:31.381916 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:17:31.381928 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 01:17:31.381942 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:17:31.381955 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:17:31.381966 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:17:31.381978 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:17:31.381991 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:17:31.382003 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 01:17:31.382016 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 01:17:31.382027 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:17:31.382038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 14 01:17:31.382049 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 01:17:31.382061 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:17:31.382073 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:17:31.382086 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:17:31.382097 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:17:31.382108 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:17:31.382120 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 01:17:31.382131 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:17:31.382142 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:17:31.382155 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:17:31.382167 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:17:31.382178 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:17:31.382190 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:17:31.382201 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:17:31.382215 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 01:17:31.382227 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:17:31.382238 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 01:17:31.382249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:17:31.382261 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:17:31.382273 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. 
Jan 14 01:17:31.382287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:17:31.382298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:17:31.382309 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:17:31.382321 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:17:31.382333 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:17:31.382344 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:17:31.382356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:17:31.382369 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 01:17:31.382381 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:17:31.382392 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:17:31.382405 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 01:17:31.382416 systemd[1]: Reached target machines.target - Containers. Jan 14 01:17:31.382428 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 01:17:31.382442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:17:31.382453 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:17:31.382465 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 01:17:31.382476 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:17:31.382487 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 14 01:17:31.382499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:17:31.382510 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:17:31.382523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:17:31.382536 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:17:31.382548 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:17:31.382560 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:17:31.382571 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:17:31.382582 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 01:17:31.382594 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:17:31.382608 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:17:31.382619 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:17:31.382631 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:17:31.383696 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:17:31.383711 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 01:17:31.383722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:17:31.383738 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 14 01:17:31.383750 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 01:17:31.383762 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:17:31.383775 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 01:17:31.383786 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:17:31.383798 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:17:31.383810 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 01:17:31.383823 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:17:31.383834 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 01:17:31.383844 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:17:31.383855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:17:31.383897 systemd-journald[1950]: Collecting audit messages is enabled. Jan 14 01:17:31.383923 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:17:31.383939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:17:31.383980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:17:31.384087 systemd-journald[1950]: Journal started Jan 14 01:17:31.384173 systemd-journald[1950]: Runtime Journal (/run/log/journal/734b96892b0d4866a6daf2ce1d3d3b39) is 8M, max 158.5M, 150.5M free. Jan 14 01:17:31.074000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 01:17:31.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:31.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.274000 audit: BPF prog-id=14 op=UNLOAD Jan 14 01:17:31.274000 audit: BPF prog-id=13 op=UNLOAD Jan 14 01:17:31.275000 audit: BPF prog-id=15 op=LOAD Jan 14 01:17:31.275000 audit: BPF prog-id=16 op=LOAD Jan 14 01:17:31.275000 audit: BPF prog-id=17 op=LOAD Jan 14 01:17:31.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:31.373000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:17:31.373000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd8d035aa0 a2=4000 a3=0 items=0 ppid=1 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:17:31.373000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:17:31.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:30.922913 systemd[1]: Queued start job for default target multi-user.target. Jan 14 01:17:30.935908 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 14 01:17:30.936394 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 01:17:31.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.396718 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 14 01:17:31.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.396892 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:17:31.397046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:17:31.399740 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:17:31.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.403524 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 01:17:31.420451 kernel: fuse: init (API version 7.41) Jan 14 01:17:31.418082 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:17:31.420973 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:17:31.426722 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 14 01:17:31.429753 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:17:31.429786 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:17:31.431991 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:17:31.435314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:17:31.435423 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:17:31.454796 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 01:17:31.459193 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 01:17:31.461927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:17:31.466683 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 01:17:31.469123 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:17:31.474768 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:17:31.478368 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:17:31.479873 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 01:17:31.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:31.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.484173 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:17:31.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.487607 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 01:17:31.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.496544 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:17:31.501007 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:17:31.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.506462 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:17:31.512306 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 01:17:31.516825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:17:31.527588 systemd-journald[1950]: Time spent on flushing to /var/log/journal/734b96892b0d4866a6daf2ce1d3d3b39 is 31.412ms for 1112 entries. 
Jan 14 01:17:31.527588 systemd-journald[1950]: System Journal (/var/log/journal/734b96892b0d4866a6daf2ce1d3d3b39) is 8M, max 2.2G, 2.2G free. Jan 14 01:17:31.612663 systemd-journald[1950]: Received client request to flush runtime journal. Jan 14 01:17:31.612715 kernel: ACPI: bus type drm_connector registered Jan 14 01:17:31.612739 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 01:17:31.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.538621 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:17:31.538809 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:17:31.552708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:17:31.566736 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jan 14 01:17:31.570819 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:17:31.613665 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:17:31.625762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:17:31.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.643923 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 01:17:31.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.727019 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:17:31.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.730000 audit: BPF prog-id=18 op=LOAD Jan 14 01:17:31.730000 audit: BPF prog-id=19 op=LOAD Jan 14 01:17:31.730000 audit: BPF prog-id=20 op=LOAD Jan 14 01:17:31.731359 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:17:31.734000 audit: BPF prog-id=21 op=LOAD Jan 14 01:17:31.736761 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:17:31.741951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 14 01:17:31.745000 audit: BPF prog-id=22 op=LOAD Jan 14 01:17:31.745000 audit: BPF prog-id=23 op=LOAD Jan 14 01:17:31.745000 audit: BPF prog-id=24 op=LOAD Jan 14 01:17:31.747766 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:17:31.750000 audit: BPF prog-id=25 op=LOAD Jan 14 01:17:31.750000 audit: BPF prog-id=26 op=LOAD Jan 14 01:17:31.750000 audit: BPF prog-id=27 op=LOAD Jan 14 01:17:31.751597 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 01:17:31.791867 systemd-tmpfiles[2026]: ACLs are not supported, ignoring. Jan 14 01:17:31.792074 systemd-tmpfiles[2026]: ACLs are not supported, ignoring. Jan 14 01:17:31.794430 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:17:31.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.802118 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:17:31.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.810855 systemd-nsresourced[2027]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:17:31.813687 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:17:31.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:31.918404 systemd-oomd[2024]: No swap; memory pressure usage will be degraded Jan 14 01:17:31.919178 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 01:17:31.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:31.938289 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 01:17:31.941725 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 01:17:31.950202 systemd-resolved[2025]: Positive Trust Anchors: Jan 14 01:17:31.950210 systemd-resolved[2025]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:17:31.950215 systemd-resolved[2025]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:17:31.950254 systemd-resolved[2025]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:17:31.954168 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 01:17:32.029657 kernel: loop2: detected capacity change from 0 to 48592 Jan 14 01:17:32.141807 systemd-resolved[2025]: Using system hostname 'ci-4578.0.0-p-dbef80f9ad'. Jan 14 01:17:32.142854 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 14 01:17:32.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.144762 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:17:32.201153 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 01:17:32.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.202000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:17:32.202000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:17:32.202000 audit: BPF prog-id=28 op=LOAD Jan 14 01:17:32.202000 audit: BPF prog-id=29 op=LOAD Jan 14 01:17:32.204139 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:17:32.233758 systemd-udevd[2050]: Using default interface naming scheme 'v257'. Jan 14 01:17:32.467657 kernel: loop3: detected capacity change from 0 to 50784 Jan 14 01:17:32.468889 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:17:32.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.472000 audit: BPF prog-id=30 op=LOAD Jan 14 01:17:32.474688 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:17:32.528541 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 14 01:17:32.585662 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:17:32.593691 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#92 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 14 01:17:32.616666 kernel: hv_vmbus: registering driver hyperv_fb Jan 14 01:17:32.619657 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 14 01:17:32.624738 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 14 01:17:32.624792 kernel: Console: switching to colour dummy device 80x25 Jan 14 01:17:32.630877 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 01:17:32.638654 kernel: hv_vmbus: registering driver hv_balloon Jan 14 01:17:32.640481 systemd-networkd[2060]: lo: Link UP Jan 14 01:17:32.640491 systemd-networkd[2060]: lo: Gained carrier Jan 14 01:17:32.642243 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:17:32.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.644868 systemd[1]: Reached target network.target - Network. Jan 14 01:17:32.647461 systemd-networkd[2060]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:17:32.647531 systemd-networkd[2060]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:17:32.649762 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:17:32.652748 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 14 01:17:32.654320 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 14 01:17:32.661653 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 14 01:17:32.663656 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a7f3a87 eth0: Data path switched to VF: enP30832s1 Jan 14 01:17:32.665352 systemd-networkd[2060]: enP30832s1: Link UP Jan 14 01:17:32.665467 systemd-networkd[2060]: eth0: Link UP Jan 14 01:17:32.665470 systemd-networkd[2060]: eth0: Gained carrier Jan 14 01:17:32.665485 systemd-networkd[2060]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:17:32.671501 systemd-networkd[2060]: enP30832s1: Gained carrier Jan 14 01:17:32.676658 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 14 01:17:32.682718 systemd-networkd[2060]: eth0: DHCPv4 address 10.200.4.14/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 01:17:32.706997 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:17:32.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.727216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:17:32.742467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:17:32.742693 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:17:32.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:32.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.754671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:17:32.805924 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:17:32.806117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:17:32.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.813822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:17:32.925673 kernel: loop4: detected capacity change from 0 to 224512 Jan 14 01:17:32.931661 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 14 01:17:32.936595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 14 01:17:32.938811 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:17:32.968681 kernel: loop5: detected capacity change from 0 to 111560 Jan 14 01:17:32.980652 kernel: loop6: detected capacity change from 0 to 48592 Jan 14 01:17:32.990509 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 14 01:17:32.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.999083 kernel: kauditd_printk_skb: 69 callbacks suppressed Jan 14 01:17:32.999138 kernel: audit: type=1130 audit(1768353452.991:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:32.999163 kernel: loop7: detected capacity change from 0 to 50784 Jan 14 01:17:33.014658 kernel: loop1: detected capacity change from 0 to 224512 Jan 14 01:17:33.030184 (sd-merge)[2139]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 14 01:17:33.032580 (sd-merge)[2139]: Merged extensions into '/usr'. Jan 14 01:17:33.036123 systemd[1]: Reload requested from client PID 2002 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:17:33.036198 systemd[1]: Reloading... Jan 14 01:17:33.086663 zram_generator::config[2172]: No configuration found. Jan 14 01:17:33.300820 systemd[1]: Reloading finished in 264 ms. Jan 14 01:17:33.331722 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 01:17:33.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.337159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 01:17:33.337651 kernel: audit: type=1130 audit(1768353453.334:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.347685 kernel: audit: type=1130 audit(1768353453.341:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.356480 systemd[1]: Starting ensure-sysext.service... Jan 14 01:17:33.359783 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:17:33.365659 kernel: audit: type=1334 audit(1768353453.362:165): prog-id=31 op=LOAD Jan 14 01:17:33.365713 kernel: audit: type=1334 audit(1768353453.362:166): prog-id=22 op=UNLOAD Jan 14 01:17:33.362000 audit: BPF prog-id=31 op=LOAD Jan 14 01:17:33.362000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:17:33.362000 audit: BPF prog-id=32 op=LOAD Jan 14 01:17:33.369005 kernel: audit: type=1334 audit(1768353453.362:167): prog-id=32 op=LOAD Jan 14 01:17:33.362000 audit: BPF prog-id=33 op=LOAD Jan 14 01:17:33.370340 kernel: audit: type=1334 audit(1768353453.362:168): prog-id=33 op=LOAD Jan 14 01:17:33.362000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:17:33.371843 kernel: audit: type=1334 audit(1768353453.362:169): prog-id=23 op=UNLOAD Jan 14 01:17:33.362000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:17:33.373122 kernel: audit: type=1334 audit(1768353453.362:170): prog-id=24 op=UNLOAD Jan 14 01:17:33.363000 audit: BPF prog-id=34 op=LOAD Jan 14 01:17:33.374407 kernel: audit: type=1334 audit(1768353453.363:171): 
prog-id=34 op=LOAD Jan 14 01:17:33.363000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:17:33.363000 audit: BPF prog-id=35 op=LOAD Jan 14 01:17:33.363000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:17:33.363000 audit: BPF prog-id=36 op=LOAD Jan 14 01:17:33.363000 audit: BPF prog-id=37 op=LOAD Jan 14 01:17:33.363000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:17:33.363000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:17:33.363000 audit: BPF prog-id=38 op=LOAD Jan 14 01:17:33.363000 audit: BPF prog-id=39 op=LOAD Jan 14 01:17:33.363000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:17:33.363000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:17:33.365000 audit: BPF prog-id=40 op=LOAD Jan 14 01:17:33.365000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:17:33.365000 audit: BPF prog-id=41 op=LOAD Jan 14 01:17:33.365000 audit: BPF prog-id=42 op=LOAD Jan 14 01:17:33.365000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:17:33.365000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:17:33.366000 audit: BPF prog-id=43 op=LOAD Jan 14 01:17:33.366000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:17:33.366000 audit: BPF prog-id=44 op=LOAD Jan 14 01:17:33.367000 audit: BPF prog-id=45 op=LOAD Jan 14 01:17:33.377000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:17:33.377000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:17:33.379000 audit: BPF prog-id=46 op=LOAD Jan 14 01:17:33.379000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:17:33.386057 systemd[1]: Reload requested from client PID 2234 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:17:33.386076 systemd[1]: Reloading... Jan 14 01:17:33.411764 systemd-tmpfiles[2235]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 01:17:33.411791 systemd-tmpfiles[2235]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 01:17:33.412057 systemd-tmpfiles[2235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 14 01:17:33.413685 systemd-tmpfiles[2235]: ACLs are not supported, ignoring. Jan 14 01:17:33.413790 systemd-tmpfiles[2235]: ACLs are not supported, ignoring. Jan 14 01:17:33.421451 systemd-tmpfiles[2235]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:17:33.421548 systemd-tmpfiles[2235]: Skipping /boot Jan 14 01:17:33.429623 systemd-tmpfiles[2235]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:17:33.429728 systemd-tmpfiles[2235]: Skipping /boot Jan 14 01:17:33.468661 zram_generator::config[2265]: No configuration found. Jan 14 01:17:33.645581 systemd[1]: Reloading finished in 259 ms. Jan 14 01:17:33.658000 audit: BPF prog-id=47 op=LOAD Jan 14 01:17:33.658000 audit: BPF prog-id=48 op=LOAD Jan 14 01:17:33.658000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:17:33.658000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:17:33.659000 audit: BPF prog-id=49 op=LOAD Jan 14 01:17:33.659000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:17:33.660000 audit: BPF prog-id=50 op=LOAD Jan 14 01:17:33.660000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:17:33.660000 audit: BPF prog-id=51 op=LOAD Jan 14 01:17:33.660000 audit: BPF prog-id=52 op=LOAD Jan 14 01:17:33.660000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:17:33.660000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:17:33.661000 audit: BPF prog-id=53 op=LOAD Jan 14 01:17:33.661000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:17:33.661000 audit: BPF prog-id=54 op=LOAD Jan 14 01:17:33.661000 audit: BPF prog-id=55 op=LOAD Jan 14 01:17:33.661000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:17:33.661000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:17:33.661000 audit: BPF prog-id=56 op=LOAD Jan 14 01:17:33.661000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:17:33.662000 audit: BPF prog-id=57 op=LOAD Jan 14 01:17:33.665000 audit: BPF prog-id=58 op=LOAD Jan 14 01:17:33.665000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:17:33.665000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:17:33.666000 audit: BPF prog-id=59 
op=LOAD Jan 14 01:17:33.666000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:17:33.666000 audit: BPF prog-id=60 op=LOAD Jan 14 01:17:33.666000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:17:33.667000 audit: BPF prog-id=61 op=LOAD Jan 14 01:17:33.667000 audit: BPF prog-id=62 op=LOAD Jan 14 01:17:33.667000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:17:33.667000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:17:33.670143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:17:33.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.680079 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:17:33.683866 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 01:17:33.688927 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 01:17:33.692410 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 01:17:33.699494 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:17:33.704678 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:17:33.704855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:17:33.707193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:17:33.711000 audit[2334]: SYSTEM_BOOT pid=2334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:33.712901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:17:33.716119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:17:33.718627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:17:33.718963 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:17:33.719079 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:17:33.719183 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:17:33.726936 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:17:33.727184 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:17:33.727402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:17:33.727594 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:17:33.727754 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 14 01:17:33.727897 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:17:33.730500 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 01:17:33.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.734093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:17:33.734778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:17:33.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.737398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:17:33.737579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:17:33.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.739993 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 14 01:17:33.740168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:17:33.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.748812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:17:33.749087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:17:33.750026 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:17:33.753883 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:17:33.757872 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:17:33.765580 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:17:33.767731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:17:33.767923 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:17:33.768032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:17:33.768202 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 14 01:17:33.770752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:17:33.772217 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:17:33.774175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:17:33.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.777072 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:17:33.777255 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:17:33.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.779527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:17:33.779806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:17:33.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:33.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.784148 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:17:33.784329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:17:33.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.787841 systemd[1]: Finished ensure-sysext.service. Jan 14 01:17:33.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:17:33.794589 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:17:33.794819 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:17:33.807456 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:17:33.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:17:34.040000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:17:34.040000 audit[2370]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3cf57560 a2=420 a3=0 items=0 ppid=2329 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:17:34.040000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:17:34.041948 augenrules[2370]: No rules Jan 14 01:17:34.042257 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:17:34.042486 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:17:34.310775 systemd-networkd[2060]: eth0: Gained IPv6LL Jan 14 01:17:34.313276 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:17:34.316975 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:17:34.704965 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 01:17:34.706987 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:17:40.705821 ldconfig[2331]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:17:40.718682 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:17:40.721719 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:17:40.734577 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:17:40.737943 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 14 01:17:40.739470 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:17:40.741090 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:17:40.742793 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:17:40.744499 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 01:17:40.747807 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:17:40.750699 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:17:40.753729 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:17:40.756710 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:17:40.765714 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:17:40.765750 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:17:40.768680 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:17:40.770400 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:17:40.774734 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:17:40.778101 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:17:40.780334 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 01:17:40.783689 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:17:40.788002 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:17:40.790944 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jan 14 01:17:40.794245 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:17:40.798563 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:17:40.801686 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:17:40.804732 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:17:40.804757 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:17:40.806352 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 01:17:40.810732 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:17:40.814244 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 01:17:40.821093 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:17:40.824265 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:17:40.828834 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:17:40.832835 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:17:40.834799 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:17:40.838854 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:17:40.841138 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 14 01:17:40.844412 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 14 01:17:40.847188 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Jan 14 01:17:40.848142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:17:40.852966 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:17:40.856969 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:17:40.861801 KVP[2394]: KVP starting; pid is:2394 Jan 14 01:17:40.863048 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 01:17:40.866494 kernel: hv_utils: KVP IC version 4.0 Jan 14 01:17:40.867670 KVP[2394]: KVP LIC Version: 3.1 Jan 14 01:17:40.870810 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 01:17:40.874976 jq[2391]: false Jan 14 01:17:40.877612 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 01:17:40.880865 extend-filesystems[2392]: Found /dev/nvme0n1p6 Jan 14 01:17:40.885700 google_oslogin_nss_cache[2393]: oslogin_cache_refresh[2393]: Refreshing passwd entry cache Jan 14 01:17:40.886677 oslogin_cache_refresh[2393]: Refreshing passwd entry cache Jan 14 01:17:40.886828 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 01:17:40.888940 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 01:17:40.889374 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 01:17:40.892049 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 01:17:40.895499 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 01:17:40.901906 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:17:40.905017 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 14 01:17:40.906934 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:17:40.910435 jq[2407]: true Jan 14 01:17:40.915616 google_oslogin_nss_cache[2393]: oslogin_cache_refresh[2393]: Failure getting users, quitting Jan 14 01:17:40.915612 oslogin_cache_refresh[2393]: Failure getting users, quitting Jan 14 01:17:40.916283 google_oslogin_nss_cache[2393]: oslogin_cache_refresh[2393]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:17:40.916283 google_oslogin_nss_cache[2393]: oslogin_cache_refresh[2393]: Refreshing group entry cache Jan 14 01:17:40.915626 oslogin_cache_refresh[2393]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:17:40.916090 oslogin_cache_refresh[2393]: Refreshing group entry cache Jan 14 01:17:40.934953 google_oslogin_nss_cache[2393]: oslogin_cache_refresh[2393]: Failure getting groups, quitting Jan 14 01:17:40.934953 google_oslogin_nss_cache[2393]: oslogin_cache_refresh[2393]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:17:40.934950 oslogin_cache_refresh[2393]: Failure getting groups, quitting Jan 14 01:17:40.934958 oslogin_cache_refresh[2393]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:17:40.936954 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:17:40.943038 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 01:17:40.945320 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:17:40.953089 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 01:17:40.961292 jq[2414]: true Jan 14 01:17:40.964689 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 01:17:40.964939 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 14 01:17:40.980687 extend-filesystems[2392]: Found /dev/nvme0n1p9 Jan 14 01:17:40.985229 chronyd[2383]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 14 01:17:40.988554 extend-filesystems[2392]: Checking size of /dev/nvme0n1p9 Jan 14 01:17:40.991263 chronyd[2383]: Timezone right/UTC failed leap second check, ignoring Jan 14 01:17:40.991558 systemd[1]: Started chronyd.service - NTP client/server. Jan 14 01:17:40.991405 chronyd[2383]: Loaded seccomp filter (level 2) Jan 14 01:17:41.305211 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:17:41.423925 extend-filesystems[2392]: Resized partition /dev/nvme0n1p9 Jan 14 01:17:41.448096 update_engine[2406]: I20260114 01:17:41.447571 2406 main.cc:92] Flatcar Update Engine starting Jan 14 01:17:41.503898 systemd-logind[2405]: New seat seat0. Jan 14 01:17:41.532385 systemd-logind[2405]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 14 01:17:41.532622 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 01:17:41.547557 extend-filesystems[2466]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:17:41.696731 tar[2412]: linux-amd64/LICENSE Jan 14 01:17:41.699069 tar[2412]: linux-amd64/helm Jan 14 01:17:41.741904 dbus-daemon[2386]: [system] SELinux support is enabled Jan 14 01:17:41.742366 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 01:17:41.748742 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:17:41.748774 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 14 01:17:41.752273 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 01:17:41.752296 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 01:17:41.759117 dbus-daemon[2386]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 01:17:41.760224 update_engine[2406]: I20260114 01:17:41.760175 2406 update_check_scheduler.cc:74] Next update check in 8m22s Jan 14 01:17:41.760651 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:17:41.766857 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 01:17:41.805231 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 14 01:17:41.813665 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 14 01:17:43.285991 coreos-metadata[2385]: Jan 14 01:17:42.305 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 01:17:43.285991 coreos-metadata[2385]: Jan 14 01:17:42.309 INFO Fetch successful Jan 14 01:17:43.285991 coreos-metadata[2385]: Jan 14 01:17:42.309 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 14 01:17:43.285991 coreos-metadata[2385]: Jan 14 01:17:42.312 INFO Fetch successful Jan 14 01:17:43.285991 coreos-metadata[2385]: Jan 14 01:17:42.313 INFO Fetching http://168.63.129.16/machine/e06ee3dd-ca3c-4496-b18f-96914e9b70a0/faad405d%2D14eb%2D4ebd%2Db787%2Ddb16c2a36b19.%5Fci%2D4578.0.0%2Dp%2Ddbef80f9ad?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 14 01:17:43.285991 coreos-metadata[2385]: Jan 14 01:17:42.314 INFO Fetch successful Jan 14 01:17:43.285991 coreos-metadata[2385]: Jan 14 01:17:42.314 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 14 01:17:43.285991 coreos-metadata[2385]: Jan 14 01:17:42.324 INFO Fetch 
successful Jan 14 01:17:42.355802 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 01:17:42.359004 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 01:17:42.839436 locksmithd[2488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:17:43.678213 extend-filesystems[2466]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 14 01:17:43.678213 extend-filesystems[2466]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 14 01:17:43.678213 extend-filesystems[2466]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 14 01:17:43.631971 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 01:17:43.685756 extend-filesystems[2392]: Resized filesystem in /dev/nvme0n1p9 Jan 14 01:17:43.632209 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:17:43.775849 sshd_keygen[2438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 01:17:43.776235 bash[2458]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:17:43.776893 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:17:43.782118 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 01:17:43.818830 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 01:17:43.824019 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 01:17:43.829485 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 14 01:17:43.855048 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 01:17:43.855286 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:17:43.859313 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 01:17:43.877847 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
Jan 14 01:17:43.884267 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:17:43.889523 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:17:43.895534 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:17:43.898277 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 01:17:44.001762 tar[2412]: linux-amd64/README.md Jan 14 01:17:44.018618 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 01:17:44.207012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:17:44.368611 containerd[2417]: time="2026-01-14T01:17:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:17:44.369036 containerd[2417]: time="2026-01-14T01:17:44.369008954Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:17:44.376542 containerd[2417]: time="2026-01-14T01:17:44.376507191Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.495µs" Jan 14 01:17:44.376542 containerd[2417]: time="2026-01-14T01:17:44.376532292Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:17:44.376650 containerd[2417]: time="2026-01-14T01:17:44.376567614Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:17:44.376650 containerd[2417]: time="2026-01-14T01:17:44.376578798Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:17:44.376739 containerd[2417]: time="2026-01-14T01:17:44.376718118Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:17:44.376739 
containerd[2417]: time="2026-01-14T01:17:44.376734098Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:17:44.376809 containerd[2417]: time="2026-01-14T01:17:44.376784975Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:17:44.376809 containerd[2417]: time="2026-01-14T01:17:44.376799921Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:17:44.376984 containerd[2417]: time="2026-01-14T01:17:44.376962832Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:17:44.376984 containerd[2417]: time="2026-01-14T01:17:44.376975214Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:17:44.377030 containerd[2417]: time="2026-01-14T01:17:44.376989185Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:17:44.377030 containerd[2417]: time="2026-01-14T01:17:44.376997487Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:17:44.377493 containerd[2417]: time="2026-01-14T01:17:44.377462385Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:17:44.377542 containerd[2417]: time="2026-01-14T01:17:44.377497791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:17:44.377595 
containerd[2417]: time="2026-01-14T01:17:44.377578813Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 01:17:44.377818 containerd[2417]: time="2026-01-14T01:17:44.377801182Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:17:44.377853 containerd[2417]: time="2026-01-14T01:17:44.377838720Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:17:44.377874 containerd[2417]: time="2026-01-14T01:17:44.377850323Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:17:44.377897 containerd[2417]: time="2026-01-14T01:17:44.377877051Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:17:44.378721 containerd[2417]: time="2026-01-14T01:17:44.378695494Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:17:44.378797 containerd[2417]: time="2026-01-14T01:17:44.378781025Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:17:44.425177 containerd[2417]: time="2026-01-14T01:17:44.425148835Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:17:44.425267 containerd[2417]: time="2026-01-14T01:17:44.425190229Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:17:44.496040 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496200201Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: 
\"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496234379Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496251119Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496263326Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496274282Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496283736Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496294501Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496305435Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496318143Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496332463Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496342724Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:17:44.496499 containerd[2417]: 
time="2026-01-14T01:17:44.496359623Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:17:44.496499 containerd[2417]: time="2026-01-14T01:17:44.496478847Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496507854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496523069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496534028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496544941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496555564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496567170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496577566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496593045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496613402Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496624896Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496673307Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496721949Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:17:44.496782 containerd[2417]: time="2026-01-14T01:17:44.496734390Z" level=info msg="Start snapshots syncer" Jan 14 01:17:44.498872 containerd[2417]: time="2026-01-14T01:17:44.497291565Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:17:44.498872 containerd[2417]: time="2026-01-14T01:17:44.497727859Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbContro
ller\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.497783991Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.497842274Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.497968510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.497997084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498011222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498025302Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498045553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498056391Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498067979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498078346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498088801Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498119437Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498132505Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:17:44.499045 containerd[2417]: time="2026-01-14T01:17:44.498141075Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:17:44.499600 containerd[2417]: time="2026-01-14T01:17:44.498152827Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:17:44.499600 containerd[2417]: time="2026-01-14T01:17:44.498161206Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:17:44.499600 containerd[2417]: time="2026-01-14T01:17:44.498208288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:17:44.499600 containerd[2417]: time="2026-01-14T01:17:44.498218454Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:17:44.499600 
containerd[2417]: time="2026-01-14T01:17:44.498235334Z" level=info msg="runtime interface created" Jan 14 01:17:44.499600 containerd[2417]: time="2026-01-14T01:17:44.498240736Z" level=info msg="created NRI interface" Jan 14 01:17:44.499600 containerd[2417]: time="2026-01-14T01:17:44.498248841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:17:44.499600 containerd[2417]: time="2026-01-14T01:17:44.498259768Z" level=info msg="Connect containerd service" Jan 14 01:17:44.499600 containerd[2417]: time="2026-01-14T01:17:44.498278820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:17:44.499600 containerd[2417]: time="2026-01-14T01:17:44.498998178Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:17:45.032906 kubelet[2539]: E0114 01:17:45.032831 2539 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:17:45.034817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:17:45.034955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:17:45.035331 systemd[1]: kubelet.service: Consumed 940ms CPU time, 265.5M memory peak. 
Jan 14 01:17:45.130185 containerd[2417]: time="2026-01-14T01:17:45.130142620Z" level=info msg="Start subscribing containerd event" Jan 14 01:17:45.130261 containerd[2417]: time="2026-01-14T01:17:45.130197876Z" level=info msg="Start recovering state" Jan 14 01:17:45.130317 containerd[2417]: time="2026-01-14T01:17:45.130303746Z" level=info msg="Start event monitor" Jan 14 01:17:45.130339 containerd[2417]: time="2026-01-14T01:17:45.130324719Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:17:45.130339 containerd[2417]: time="2026-01-14T01:17:45.130332535Z" level=info msg="Start streaming server" Jan 14 01:17:45.130392 containerd[2417]: time="2026-01-14T01:17:45.130341435Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:17:45.130392 containerd[2417]: time="2026-01-14T01:17:45.130349673Z" level=info msg="runtime interface starting up..." Jan 14 01:17:45.130392 containerd[2417]: time="2026-01-14T01:17:45.130356076Z" level=info msg="starting plugins..." Jan 14 01:17:45.130392 containerd[2417]: time="2026-01-14T01:17:45.130368227Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:17:45.130585 containerd[2417]: time="2026-01-14T01:17:45.130571597Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:17:45.130674 containerd[2417]: time="2026-01-14T01:17:45.130663543Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 01:17:45.131059 containerd[2417]: time="2026-01-14T01:17:45.130749224Z" level=info msg="containerd successfully booted in 0.762442s" Jan 14 01:17:45.130908 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 01:17:45.133096 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 01:17:45.138224 systemd[1]: Startup finished in 4.707s (kernel) + 18.686s (initrd) + 17.364s (userspace) = 40.757s. 
Jan 14 01:17:45.874507 login[2530]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:17:45.875276 login[2529]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:17:45.883949 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:17:45.885923 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:17:45.896517 systemd-logind[2405]: New session 2 of user core. Jan 14 01:17:45.901220 systemd-logind[2405]: New session 1 of user core. Jan 14 01:17:45.919015 waagent[2527]: 2026-01-14T01:17:45.918950Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 14 01:17:45.919402 waagent[2527]: 2026-01-14T01:17:45.919267Z INFO Daemon Daemon OS: flatcar 4578.0.0 Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.919415Z INFO Daemon Daemon Python: 3.12.11 Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.919624Z INFO Daemon Daemon Run daemon Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.919806Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4578.0.0' Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.920037Z INFO Daemon Daemon Using waagent for provisioning Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.920193Z INFO Daemon Daemon Activate resource disk Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.920357Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.921959Z INFO Daemon Daemon Found device: None Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.922238Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.922532Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, 
message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.923131Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 01:17:45.925690 waagent[2527]: 2026-01-14T01:17:45.923245Z INFO Daemon Daemon Running default provisioning handler Jan 14 01:17:45.939584 waagent[2527]: 2026-01-14T01:17:45.938489Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 14 01:17:45.944492 waagent[2527]: 2026-01-14T01:17:45.944445Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 01:17:45.946622 waagent[2527]: 2026-01-14T01:17:45.944624Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 01:17:45.946622 waagent[2527]: 2026-01-14T01:17:45.944915Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 01:17:46.007071 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:17:46.009490 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 01:17:46.023674 (systemd)[2574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:17:46.025685 systemd-logind[2405]: New session 3 of user core. Jan 14 01:17:46.450064 systemd[2574]: Queued start job for default target default.target. Jan 14 01:17:46.456427 systemd[2574]: Created slice app.slice - User Application Slice. Jan 14 01:17:46.456460 systemd[2574]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:17:46.456476 systemd[2574]: Reached target paths.target - Paths. Jan 14 01:17:46.456518 systemd[2574]: Reached target timers.target - Timers. Jan 14 01:17:46.457969 systemd[2574]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 14 01:17:46.458716 systemd[2574]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:17:46.470288 waagent[2527]: 2026-01-14T01:17:46.470216Z INFO Daemon Daemon Successfully mounted dvd Jan 14 01:17:46.477296 systemd[2574]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 01:17:46.481053 systemd[2574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 01:17:46.481151 systemd[2574]: Reached target sockets.target - Sockets. Jan 14 01:17:46.481186 systemd[2574]: Reached target basic.target - Basic System. Jan 14 01:17:46.481214 systemd[2574]: Reached target default.target - Main User Target. Jan 14 01:17:46.481238 systemd[2574]: Startup finished in 451ms. Jan 14 01:17:46.481971 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:17:46.487806 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:17:46.488528 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 01:17:46.516121 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 01:17:46.518720 waagent[2527]: 2026-01-14T01:17:46.518671Z INFO Daemon Daemon Detect protocol endpoint Jan 14 01:17:46.519714 waagent[2527]: 2026-01-14T01:17:46.518950Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 01:17:46.519994 waagent[2527]: 2026-01-14T01:17:46.519958Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 14 01:17:46.520358 waagent[2527]: 2026-01-14T01:17:46.520238Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 01:17:46.520735 waagent[2527]: 2026-01-14T01:17:46.520709Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 01:17:46.521112 waagent[2527]: 2026-01-14T01:17:46.521091Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 01:17:46.602905 waagent[2527]: 2026-01-14T01:17:46.602708Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 01:17:46.605714 waagent[2527]: 2026-01-14T01:17:46.605686Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 01:17:46.607414 waagent[2527]: 2026-01-14T01:17:46.607367Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 01:17:46.946341 waagent[2527]: 2026-01-14T01:17:46.946261Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 01:17:46.948037 waagent[2527]: 2026-01-14T01:17:46.947438Z INFO Daemon Daemon Forcing an update of the goal state. Jan 14 01:17:46.956928 waagent[2527]: 2026-01-14T01:17:46.956889Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 01:17:46.968863 waagent[2527]: 2026-01-14T01:17:46.968828Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Jan 14 01:17:46.970292 waagent[2527]: 2026-01-14T01:17:46.970256Z INFO Daemon Jan 14 01:17:46.970935 waagent[2527]: 2026-01-14T01:17:46.970535Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6e38a32c-be3c-4098-8bfd-03511d45619d eTag: 10029001851618089242 source: Fabric] Jan 14 01:17:46.973344 waagent[2527]: 2026-01-14T01:17:46.973310Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 14 01:17:46.974892 waagent[2527]: 2026-01-14T01:17:46.974860Z INFO Daemon Jan 14 01:17:46.975622 waagent[2527]: 2026-01-14T01:17:46.975595Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 01:17:46.982245 waagent[2527]: 2026-01-14T01:17:46.982206Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 01:17:47.058999 waagent[2527]: 2026-01-14T01:17:47.058949Z INFO Daemon Downloaded certificate {'thumbprint': '385C0CE2D1026FFDDB0DC16C651D2A3AFFB14354', 'hasPrivateKey': True} Jan 14 01:17:47.061679 waagent[2527]: 2026-01-14T01:17:47.061623Z INFO Daemon Fetch goal state completed Jan 14 01:17:47.068213 waagent[2527]: 2026-01-14T01:17:47.068182Z INFO Daemon Daemon Starting provisioning Jan 14 01:17:47.071322 waagent[2527]: 2026-01-14T01:17:47.068357Z INFO Daemon Daemon Handle ovf-env.xml. Jan 14 01:17:47.071322 waagent[2527]: 2026-01-14T01:17:47.068719Z INFO Daemon Daemon Set hostname [ci-4578.0.0-p-dbef80f9ad] Jan 14 01:17:47.987666 waagent[2527]: 2026-01-14T01:17:47.987580Z INFO Daemon Daemon Publish hostname [ci-4578.0.0-p-dbef80f9ad] Jan 14 01:17:47.989486 waagent[2527]: 2026-01-14T01:17:47.989437Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 01:17:47.991123 waagent[2527]: 2026-01-14T01:17:47.991086Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 01:17:47.999042 systemd-networkd[2060]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:17:47.999051 systemd-networkd[2060]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 01:17:47.999129 systemd-networkd[2060]: eth0: DHCP lease lost Jan 14 01:17:48.016035 waagent[2527]: 2026-01-14T01:17:48.015958Z INFO Daemon Daemon Create user account if not exists Jan 14 01:17:48.017489 waagent[2527]: 2026-01-14T01:17:48.016221Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 01:17:48.017489 waagent[2527]: 2026-01-14T01:17:48.016447Z INFO Daemon Daemon Configure sudoer Jan 14 01:17:48.023679 systemd-networkd[2060]: eth0: DHCPv4 address 10.200.4.14/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 01:17:48.732774 waagent[2527]: 2026-01-14T01:17:48.732699Z INFO Daemon Daemon Configure sshd Jan 14 01:17:49.428598 waagent[2527]: 2026-01-14T01:17:49.428510Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 01:17:49.434553 waagent[2527]: 2026-01-14T01:17:49.428846Z INFO Daemon Daemon Deploy ssh public key. Jan 14 01:17:49.501994 waagent[2527]: 2026-01-14T01:17:49.501954Z INFO Daemon Daemon Provisioning complete Jan 14 01:17:49.511533 waagent[2527]: 2026-01-14T01:17:49.511498Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 01:17:49.511933 waagent[2527]: 2026-01-14T01:17:49.511725Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 14 01:17:49.514247 waagent[2527]: 2026-01-14T01:17:49.511974Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 14 01:17:49.629999 waagent[2621]: 2026-01-14T01:17:49.629937Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 14 01:17:49.630256 waagent[2621]: 2026-01-14T01:17:49.630046Z INFO ExtHandler ExtHandler OS: flatcar 4578.0.0 Jan 14 01:17:49.630256 waagent[2621]: 2026-01-14T01:17:49.630102Z INFO ExtHandler ExtHandler Python: 3.12.11 Jan 14 01:17:49.630256 waagent[2621]: 2026-01-14T01:17:49.630148Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 14 01:17:49.668598 waagent[2621]: 2026-01-14T01:17:49.668550Z INFO ExtHandler ExtHandler Distro: flatcar-4578.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.12.11; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 14 01:17:49.668771 waagent[2621]: 2026-01-14T01:17:49.668744Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 01:17:49.668834 waagent[2621]: 2026-01-14T01:17:49.668807Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 01:17:49.675850 waagent[2621]: 2026-01-14T01:17:49.675790Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 01:17:49.680663 waagent[2621]: 2026-01-14T01:17:49.680582Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Jan 14 01:17:49.681010 waagent[2621]: 2026-01-14T01:17:49.680977Z INFO ExtHandler Jan 14 01:17:49.681065 waagent[2621]: 2026-01-14T01:17:49.681043Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f9defea1-e27b-469d-a5b1-e0808e119255 eTag: 10029001851618089242 source: Fabric] Jan 14 01:17:49.681287 waagent[2621]: 2026-01-14T01:17:49.681259Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 14 01:17:49.681709 waagent[2621]: 2026-01-14T01:17:49.681680Z INFO ExtHandler Jan 14 01:17:49.681751 waagent[2621]: 2026-01-14T01:17:49.681733Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 01:17:49.686459 waagent[2621]: 2026-01-14T01:17:49.686428Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 01:17:49.755586 waagent[2621]: 2026-01-14T01:17:49.755534Z INFO ExtHandler Downloaded certificate {'thumbprint': '385C0CE2D1026FFDDB0DC16C651D2A3AFFB14354', 'hasPrivateKey': True} Jan 14 01:17:49.755961 waagent[2621]: 2026-01-14T01:17:49.755929Z INFO ExtHandler Fetch goal state completed Jan 14 01:17:49.766449 waagent[2621]: 2026-01-14T01:17:49.766405Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 14 01:17:49.770671 waagent[2621]: 2026-01-14T01:17:49.770581Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2621 Jan 14 01:17:49.770774 waagent[2621]: 2026-01-14T01:17:49.770736Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 01:17:49.771030 waagent[2621]: 2026-01-14T01:17:49.771003Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 14 01:17:49.772110 waagent[2621]: 2026-01-14T01:17:49.772075Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4578.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 01:17:49.772414 waagent[2621]: 2026-01-14T01:17:49.772385Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4578.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 14 01:17:49.772526 waagent[2621]: 2026-01-14T01:17:49.772502Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 14 01:17:49.773019 waagent[2621]: 2026-01-14T01:17:49.772991Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Jan 14 01:17:49.822554 waagent[2621]: 2026-01-14T01:17:49.822524Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 01:17:49.822722 waagent[2621]: 2026-01-14T01:17:49.822698Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 01:17:49.828374 waagent[2621]: 2026-01-14T01:17:49.828000Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 01:17:49.833091 systemd[1]: Reload requested from client PID 2636 ('systemctl') (unit waagent.service)... Jan 14 01:17:49.833104 systemd[1]: Reloading... Jan 14 01:17:49.896670 zram_generator::config[2674]: No configuration found. Jan 14 01:17:50.104162 systemd[1]: Reloading finished in 270 ms. Jan 14 01:17:50.117659 waagent[2621]: 2026-01-14T01:17:50.117333Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 01:17:50.117659 waagent[2621]: 2026-01-14T01:17:50.117481Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 01:17:50.427856 waagent[2621]: 2026-01-14T01:17:50.427752Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 01:17:50.428083 waagent[2621]: 2026-01-14T01:17:50.428054Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 14 01:17:50.428846 waagent[2621]: 2026-01-14T01:17:50.428797Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 14 01:17:50.429159 waagent[2621]: 2026-01-14T01:17:50.429126Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jan 14 01:17:50.429331 waagent[2621]: 2026-01-14T01:17:50.429215Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 01:17:50.429485 waagent[2621]: 2026-01-14T01:17:50.429461Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 01:17:50.429562 waagent[2621]: 2026-01-14T01:17:50.429535Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 01:17:50.429625 waagent[2621]: 2026-01-14T01:17:50.429603Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 01:17:50.429921 waagent[2621]: 2026-01-14T01:17:50.429895Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 01:17:50.430081 waagent[2621]: 2026-01-14T01:17:50.430055Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 01:17:50.430081 waagent[2621]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 01:17:50.430081 waagent[2621]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 01:17:50.430081 waagent[2621]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 01:17:50.430081 waagent[2621]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 01:17:50.430081 waagent[2621]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 01:17:50.430081 waagent[2621]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 01:17:50.430400 waagent[2621]: 2026-01-14T01:17:50.430355Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 01:17:50.430584 waagent[2621]: 2026-01-14T01:17:50.430557Z INFO EnvHandler ExtHandler Configure routes Jan 14 01:17:50.430652 waagent[2621]: 2026-01-14T01:17:50.430614Z INFO EnvHandler ExtHandler Gateway:None Jan 14 01:17:50.430775 waagent[2621]: 2026-01-14T01:17:50.430683Z INFO EnvHandler ExtHandler Routes:None Jan 14 01:17:50.431018 waagent[2621]: 2026-01-14T01:17:50.430995Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 14 01:17:50.431271 waagent[2621]: 2026-01-14T01:17:50.431225Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 01:17:50.431329 waagent[2621]: 2026-01-14T01:17:50.431290Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 01:17:50.431768 waagent[2621]: 2026-01-14T01:17:50.431736Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 01:17:50.436882 waagent[2621]: 2026-01-14T01:17:50.436849Z INFO ExtHandler ExtHandler Jan 14 01:17:50.437027 waagent[2621]: 2026-01-14T01:17:50.436999Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8f49d777-cb45-4664-97c0-949fce73f9f4 correlation 9d733e6d-0d48-407d-8f5f-837a80ecd238 created: 2026-01-14T01:16:42.330627Z] Jan 14 01:17:50.437792 waagent[2621]: 2026-01-14T01:17:50.437762Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 14 01:17:50.438356 waagent[2621]: 2026-01-14T01:17:50.438330Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 14 01:17:50.467474 waagent[2621]: 2026-01-14T01:17:50.467428Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 14 01:17:50.467474 waagent[2621]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 14 01:17:50.467816 waagent[2621]: 2026-01-14T01:17:50.467787Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 51B25CD2-5F90-4120-B557-3197791F0333;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 14 01:17:50.523541 waagent[2621]: 2026-01-14T01:17:50.523490Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 14 01:17:50.523541 waagent[2621]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:17:50.523541 waagent[2621]: pkts bytes target prot opt in out source destination Jan 14 01:17:50.523541 waagent[2621]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:17:50.523541 waagent[2621]: pkts bytes target prot opt in out source destination Jan 14 01:17:50.523541 waagent[2621]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:17:50.523541 waagent[2621]: pkts bytes target prot opt in out source destination Jan 14 01:17:50.523541 waagent[2621]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 01:17:50.523541 waagent[2621]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 01:17:50.523541 waagent[2621]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 01:17:50.526387 waagent[2621]: 2026-01-14T01:17:50.526337Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 01:17:50.526387 waagent[2621]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:17:50.526387 waagent[2621]: pkts bytes target prot opt in out source destination Jan 14 01:17:50.526387 waagent[2621]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:17:50.526387 waagent[2621]: pkts bytes target prot opt in out source destination Jan 14 01:17:50.526387 waagent[2621]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:17:50.526387 waagent[2621]: pkts bytes target prot opt in out source destination Jan 14 01:17:50.526387 waagent[2621]: 0 0 ACCEPT tcp -- * * 
0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 01:17:50.526387 waagent[2621]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 01:17:50.526387 waagent[2621]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 01:17:50.534791 waagent[2621]: 2026-01-14T01:17:50.534746Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 01:17:50.534791 waagent[2621]: Executing ['ip', '-a', '-o', 'link']: Jan 14 01:17:50.534791 waagent[2621]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 01:17:50.534791 waagent[2621]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:7f:3a:87 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx000d3a7f3a87 Jan 14 01:17:50.534791 waagent[2621]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:7f:3a:87 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 14 01:17:50.534791 waagent[2621]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 01:17:50.534791 waagent[2621]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 01:17:50.534791 waagent[2621]: 2: eth0 inet 10.200.4.14/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 01:17:50.534791 waagent[2621]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 01:17:50.534791 waagent[2621]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 01:17:50.534791 waagent[2621]: 2: eth0 inet6 fe80::20d:3aff:fe7f:3a87/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 01:17:55.249625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:17:55.251126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:17:55.763441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:17:55.772861 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:17:55.814247 kubelet[2778]: E0114 01:17:55.814193 2778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:17:55.817131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:17:55.817266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:17:55.817598 systemd[1]: kubelet.service: Consumed 136ms CPU time, 111.3M memory peak. Jan 14 01:17:59.014290 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 01:17:59.015414 systemd[1]: Started sshd@0-10.200.4.14:22-10.200.16.10:45064.service - OpenSSH per-connection server daemon (10.200.16.10:45064). Jan 14 01:17:59.721665 sshd[2787]: Accepted publickey for core from 10.200.16.10 port 45064 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:17:59.722767 sshd-session[2787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:17:59.727254 systemd-logind[2405]: New session 4 of user core. Jan 14 01:17:59.731796 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:18:00.135435 systemd[1]: Started sshd@1-10.200.4.14:22-10.200.16.10:57298.service - OpenSSH per-connection server daemon (10.200.16.10:57298). 
Jan 14 01:18:00.668422 sshd[2794]: Accepted publickey for core from 10.200.16.10 port 57298 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:18:00.669706 sshd-session[2794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:18:00.674359 systemd-logind[2405]: New session 5 of user core. Jan 14 01:18:00.680808 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 01:18:00.970381 sshd[2798]: Connection closed by 10.200.16.10 port 57298 Jan 14 01:18:00.972033 sshd-session[2794]: pam_unix(sshd:session): session closed for user core Jan 14 01:18:00.975166 systemd-logind[2405]: Session 5 logged out. Waiting for processes to exit. Jan 14 01:18:00.975744 systemd[1]: sshd@1-10.200.4.14:22-10.200.16.10:57298.service: Deactivated successfully. Jan 14 01:18:00.977178 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 01:18:00.978351 systemd-logind[2405]: Removed session 5. Jan 14 01:18:01.084197 systemd[1]: Started sshd@2-10.200.4.14:22-10.200.16.10:57302.service - OpenSSH per-connection server daemon (10.200.16.10:57302). Jan 14 01:18:01.619367 sshd[2804]: Accepted publickey for core from 10.200.16.10 port 57302 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:18:01.620153 sshd-session[2804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:18:01.624577 systemd-logind[2405]: New session 6 of user core. Jan 14 01:18:01.632806 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 01:18:01.918111 sshd[2808]: Connection closed by 10.200.16.10 port 57302 Jan 14 01:18:01.918811 sshd-session[2804]: pam_unix(sshd:session): session closed for user core Jan 14 01:18:01.922578 systemd[1]: sshd@2-10.200.4.14:22-10.200.16.10:57302.service: Deactivated successfully. Jan 14 01:18:01.924173 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 01:18:01.924955 systemd-logind[2405]: Session 6 logged out. 
Waiting for processes to exit. Jan 14 01:18:01.925987 systemd-logind[2405]: Removed session 6. Jan 14 01:18:02.033161 systemd[1]: Started sshd@3-10.200.4.14:22-10.200.16.10:57316.service - OpenSSH per-connection server daemon (10.200.16.10:57316). Jan 14 01:18:02.568832 sshd[2814]: Accepted publickey for core from 10.200.16.10 port 57316 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:18:02.569951 sshd-session[2814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:18:02.574331 systemd-logind[2405]: New session 7 of user core. Jan 14 01:18:02.580825 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 01:18:02.872356 sshd[2818]: Connection closed by 10.200.16.10 port 57316 Jan 14 01:18:02.873015 sshd-session[2814]: pam_unix(sshd:session): session closed for user core Jan 14 01:18:02.876317 systemd-logind[2405]: Session 7 logged out. Waiting for processes to exit. Jan 14 01:18:02.876473 systemd[1]: sshd@3-10.200.4.14:22-10.200.16.10:57316.service: Deactivated successfully. Jan 14 01:18:02.877899 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 01:18:02.879306 systemd-logind[2405]: Removed session 7. Jan 14 01:18:02.982123 systemd[1]: Started sshd@4-10.200.4.14:22-10.200.16.10:57330.service - OpenSSH per-connection server daemon (10.200.16.10:57330). Jan 14 01:18:03.518672 sshd[2824]: Accepted publickey for core from 10.200.16.10 port 57330 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:18:03.519428 sshd-session[2824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:18:03.523874 systemd-logind[2405]: New session 8 of user core. Jan 14 01:18:03.533821 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 01:18:03.861988 sudo[2829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:18:03.862250 sudo[2829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:18:03.888403 sudo[2829]: pam_unix(sudo:session): session closed for user root Jan 14 01:18:03.988316 sshd[2828]: Connection closed by 10.200.16.10 port 57330 Jan 14 01:18:03.989839 sshd-session[2824]: pam_unix(sshd:session): session closed for user core Jan 14 01:18:03.993198 systemd[1]: sshd@4-10.200.4.14:22-10.200.16.10:57330.service: Deactivated successfully. Jan 14 01:18:03.994793 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:18:03.995500 systemd-logind[2405]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:18:03.996766 systemd-logind[2405]: Removed session 8. Jan 14 01:18:04.102239 systemd[1]: Started sshd@5-10.200.4.14:22-10.200.16.10:57336.service - OpenSSH per-connection server daemon (10.200.16.10:57336). Jan 14 01:18:04.643676 sshd[2836]: Accepted publickey for core from 10.200.16.10 port 57336 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:18:04.644744 sshd-session[2836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:18:04.649160 systemd-logind[2405]: New session 9 of user core. Jan 14 01:18:04.653802 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 01:18:04.777880 chronyd[2383]: Selected source PHC0 Jan 14 01:18:04.848056 sudo[2842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:18:04.848299 sudo[2842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:18:04.853609 sudo[2842]: pam_unix(sudo:session): session closed for user root Jan 14 01:18:04.858227 sudo[2841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:18:04.858467 sudo[2841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:18:04.864758 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:18:04.897892 kernel: kauditd_printk_skb: 79 callbacks suppressed Jan 14 01:18:04.897946 kernel: audit: type=1305 audit(1768353484.892:249): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:18:04.892000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:18:04.894377 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:18:04.898088 augenrules[2866]: No rules Jan 14 01:18:04.894590 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 01:18:04.898436 sudo[2841]: pam_unix(sudo:session): session closed for user root Jan 14 01:18:04.892000 audit[2866]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc0d06d810 a2=420 a3=0 items=0 ppid=2847 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:04.892000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:18:04.905380 kernel: audit: type=1300 audit(1768353484.892:249): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc0d06d810 a2=420 a3=0 items=0 ppid=2847 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:04.905419 kernel: audit: type=1327 audit(1768353484.892:249): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:18:04.905441 kernel: audit: type=1130 audit(1768353484.893:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:04.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:04.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:18:04.911908 kernel: audit: type=1131 audit(1768353484.893:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:04.896000 audit[2841]: USER_END pid=2841 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:18:04.915537 kernel: audit: type=1106 audit(1768353484.896:252): pid=2841 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:18:04.896000 audit[2841]: CRED_DISP pid=2841 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:18:04.918704 kernel: audit: type=1104 audit(1768353484.896:253): pid=2841 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:18:04.997394 sshd[2840]: Connection closed by 10.200.16.10 port 57336 Jan 14 01:18:04.997781 sshd-session[2836]: pam_unix(sshd:session): session closed for user core Jan 14 01:18:04.997000 audit[2836]: USER_END pid=2836 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:18:05.005854 kernel: audit: type=1106 audit(1768353484.997:254): pid=2836 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:18:05.005920 kernel: audit: type=1104 audit(1768353484.997:255): pid=2836 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:18:04.997000 audit[2836]: CRED_DISP pid=2836 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:18:05.006520 systemd[1]: sshd@5-10.200.4.14:22-10.200.16.10:57336.service: Deactivated successfully. Jan 14 01:18:05.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.14:22-10.200.16.10:57336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:05.010679 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 14 01:18:05.011651 kernel: audit: type=1131 audit(1768353485.005:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.14:22-10.200.16.10:57336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:18:05.011935 systemd-logind[2405]: Session 9 logged out. Waiting for processes to exit.
Jan 14 01:18:05.012760 systemd-logind[2405]: Removed session 9.
Jan 14 01:18:05.105945 systemd[1]: Started sshd@6-10.200.4.14:22-10.200.16.10:57342.service - OpenSSH per-connection server daemon (10.200.16.10:57342).
Jan 14 01:18:05.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.14:22-10.200.16.10:57342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:18:05.645000 audit[2875]: USER_ACCT pid=2875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:18:05.646197 sshd[2875]: Accepted publickey for core from 10.200.16.10 port 57342 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo
Jan 14 01:18:05.646000 audit[2875]: CRED_ACQ pid=2875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:18:05.646000 audit[2875]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3a37a610 a2=3 a3=0 items=0 ppid=1 pid=2875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:18:05.646000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:18:05.647314 sshd-session[2875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:18:05.651567 systemd-logind[2405]: New session 10 of user core.
Jan 14 01:18:05.658787 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 14 01:18:05.660000 audit[2875]: USER_START pid=2875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:18:05.661000 audit[2879]: CRED_ACQ pid=2879 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:18:05.848000 audit[2880]: USER_ACCT pid=2880 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:18:05.849869 sudo[2880]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 01:18:05.848000 audit[2880]: CRED_REFR pid=2880 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:18:05.850124 sudo[2880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:18:05.848000 audit[2880]: USER_START pid=2880 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:18:05.999607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 14 01:18:06.000873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:18:07.138605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:18:07.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:18:07.141676 (kubelet)[2902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 01:18:07.181838 kubelet[2902]: E0114 01:18:07.181785 2902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 01:18:07.183276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 01:18:07.183373 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 01:18:07.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:18:07.183887 systemd[1]: kubelet.service: Consumed 130ms CPU time, 110.1M memory peak.
Jan 14 01:18:08.079167 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 14 01:18:08.089906 (dockerd)[2913]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:18:09.890917 dockerd[2913]: time="2026-01-14T01:18:09.890742384Z" level=info msg="Starting up" Jan 14 01:18:10.035250 dockerd[2913]: time="2026-01-14T01:18:10.035218284Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:18:10.044760 dockerd[2913]: time="2026-01-14T01:18:10.044726948Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:18:10.098455 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1432851439-merged.mount: Deactivated successfully. Jan 14 01:18:11.793968 dockerd[2913]: time="2026-01-14T01:18:11.793919096Z" level=info msg="Loading containers: start." Jan 14 01:18:11.844661 kernel: Initializing XFRM netlink socket Jan 14 01:18:11.871776 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 01:18:11.871858 kernel: audit: type=1325 audit(1768353491.869:268): table=nat:5 family=2 entries=2 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.869000 audit[2959]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.879678 kernel: audit: type=1300 audit(1768353491.869:268): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe5c024c00 a2=0 a3=0 items=0 ppid=2913 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.869000 audit[2959]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe5c024c00 a2=0 a3=0 items=0 ppid=2913 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:18:11.881764 kernel: audit: type=1327 audit(1768353491.869:268): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:18:11.873000 audit[2961]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.884596 kernel: audit: type=1325 audit(1768353491.873:269): table=filter:6 family=2 entries=2 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.873000 audit[2961]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdfe04c970 a2=0 a3=0 items=0 ppid=2913 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.889654 kernel: audit: type=1300 audit(1768353491.873:269): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdfe04c970 a2=0 a3=0 items=0 ppid=2913 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:18:11.893173 kernel: audit: type=1327 audit(1768353491.873:269): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:18:11.880000 audit[2963]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.896711 
kernel: audit: type=1325 audit(1768353491.880:270): table=filter:7 family=2 entries=1 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.880000 audit[2963]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2ef864a0 a2=0 a3=0 items=0 ppid=2913 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.901602 kernel: audit: type=1300 audit(1768353491.880:270): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2ef864a0 a2=0 a3=0 items=0 ppid=2913 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.880000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:18:11.905235 kernel: audit: type=1327 audit(1768353491.880:270): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:18:11.886000 audit[2965]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.907867 kernel: audit: type=1325 audit(1768353491.886:271): table=filter:8 family=2 entries=1 op=nft_register_chain pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.886000 audit[2965]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebc7d0920 a2=0 a3=0 items=0 ppid=2913 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.886000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:18:11.889000 audit[2967]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.889000 audit[2967]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc935ef100 a2=0 a3=0 items=0 ppid=2913 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.889000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:18:11.895000 audit[2969]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.895000 audit[2969]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd8649c0a0 a2=0 a3=0 items=0 ppid=2913 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.895000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:18:11.900000 audit[2971]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.900000 audit[2971]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffbef452b0 a2=0 a3=0 items=0 ppid=2913 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.900000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:18:11.904000 audit[2973]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.904000 audit[2973]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdf3aaa850 a2=0 a3=0 items=0 ppid=2913 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.904000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:18:11.950000 audit[2976]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.950000 audit[2976]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffee47552a0 a2=0 a3=0 items=0 ppid=2913 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:18:11.952000 audit[2978]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.952000 audit[2978]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeec61b520 a2=0 a3=0 items=0 ppid=2913 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:18:11.954000 audit[2980]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.954000 audit[2980]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd3749fa20 a2=0 a3=0 items=0 ppid=2913 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:18:11.955000 audit[2982]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.955000 audit[2982]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffde379d500 a2=0 a3=0 items=0 ppid=2913 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.955000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:18:11.957000 audit[2984]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:11.957000 audit[2984]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffec0c28fb0 a2=0 a3=0 items=0 ppid=2913 pid=2984 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:11.957000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:18:12.155000 audit[3014]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.155000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc6fa51260 a2=0 a3=0 items=0 ppid=2913 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:18:12.157000 audit[3016]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.157000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd4cc91460 a2=0 a3=0 items=0 ppid=2913 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:18:12.159000 audit[3018]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.159000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe32ab700 a2=0 a3=0 items=0 ppid=2913 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:18:12.161000 audit[3020]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.161000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3810f2e0 a2=0 a3=0 items=0 ppid=2913 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:18:12.162000 audit[3022]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.162000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdee4ea980 a2=0 a3=0 items=0 ppid=2913 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:18:12.164000 audit[3024]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.164000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffbda1c290 a2=0 a3=0 items=0 ppid=2913 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:18:12.165000 audit[3026]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.165000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffec8ab340 a2=0 a3=0 items=0 ppid=2913 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:18:12.167000 audit[3028]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.167000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd40bf43b0 a2=0 a3=0 items=0 ppid=2913 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:18:12.169000 audit[3030]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.169000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 
a1=7fff4af8fa00 a2=0 a3=0 items=0 ppid=2913 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:18:12.171000 audit[3032]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.171000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffee331d4e0 a2=0 a3=0 items=0 ppid=2913 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:18:12.173000 audit[3034]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.173000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffccd3268f0 a2=0 a3=0 items=0 ppid=2913 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:18:12.175000 audit[3036]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 14 01:18:12.175000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd4f0beb00 a2=0 a3=0 items=0 ppid=2913 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:18:12.176000 audit[3038]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.176000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff8d3d9230 a2=0 a3=0 items=0 ppid=2913 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:18:12.181000 audit[3043]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.181000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe1ade80c0 a2=0 a3=0 items=0 ppid=2913 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:18:12.182000 audit[3045]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3045 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.182000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc4087cbf0 a2=0 a3=0 items=0 ppid=2913 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:18:12.184000 audit[3047]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.184000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcdd4a7c50 a2=0 a3=0 items=0 ppid=2913 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:18:12.186000 audit[3049]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.186000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe5124ffe0 a2=0 a3=0 items=0 ppid=2913 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:18:12.188000 audit[3051]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3051 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.188000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffda8e43380 a2=0 a3=0 items=0 ppid=2913 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.188000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:18:12.189000 audit[3053]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:18:12.189000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffc7dd3e80 a2=0 a3=0 items=0 ppid=2913 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:18:12.322000 audit[3058]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.322000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc220594c0 a2=0 a3=0 items=0 ppid=2913 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.322000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:18:12.324000 audit[3060]: NETFILTER_CFG 
table=nat:38 family=2 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.324000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd4676fe40 a2=0 a3=0 items=0 ppid=2913 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:18:12.332000 audit[3068]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.332000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffef5f163b0 a2=0 a3=0 items=0 ppid=2913 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.332000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:18:12.336000 audit[3073]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.336000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeeb29fb20 a2=0 a3=0 items=0 ppid=2913 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.336000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:18:12.338000 audit[3075]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.338000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc7c44a310 a2=0 a3=0 items=0 ppid=2913 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.338000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:18:12.340000 audit[3077]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.340000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc339a5140 a2=0 a3=0 items=0 ppid=2913 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:18:12.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:18:12.341000 audit[3079]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:18:12.341000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd889bf460 a2=0 a3=0 items=0 ppid=2913 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:18:12.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 14 01:18:12.343000 audit[3081]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:18:12.343000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd627b7010 a2=0 a3=0 items=0 ppid=2913 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:18:12.343000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Jan 14 01:18:12.345758 systemd-networkd[2060]: docker0: Link UP
Jan 14 01:18:12.374439 dockerd[2913]: time="2026-01-14T01:18:12.374404856Z" level=info msg="Loading containers: done."
Jan 14 01:18:12.385605 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2614141916-merged.mount: Deactivated successfully.
Jan 14 01:18:15.425261 dockerd[2913]: time="2026-01-14T01:18:15.425194778Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:18:15.425713 dockerd[2913]: time="2026-01-14T01:18:15.425305273Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:18:15.425713 dockerd[2913]: time="2026-01-14T01:18:15.425395296Z" level=info msg="Initializing buildkit" Jan 14 01:18:15.883392 dockerd[2913]: time="2026-01-14T01:18:15.883349304Z" level=info msg="Completed buildkit initialization" Jan 14 01:18:15.889649 dockerd[2913]: time="2026-01-14T01:18:15.889594434Z" level=info msg="Daemon has completed initialization" Jan 14 01:18:15.890256 dockerd[2913]: time="2026-01-14T01:18:15.889774961Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:18:15.889926 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:18:15.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:16.894295 containerd[2417]: time="2026-01-14T01:18:16.894257387Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 01:18:17.249554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 01:18:17.251499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:18:20.825024 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 14 01:18:21.263617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:18:21.269513 kernel: kauditd_printk_skb: 111 callbacks suppressed Jan 14 01:18:21.269584 kernel: audit: type=1130 audit(1768353501.263:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:21.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:21.271939 (kubelet)[3128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:18:21.306109 kubelet[3128]: E0114 01:18:21.306076 3128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:18:21.307535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:18:21.307684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:18:21.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:18:21.308009 systemd[1]: kubelet.service: Consumed 130ms CPU time, 108.7M memory peak. Jan 14 01:18:21.311663 kernel: audit: type=1131 audit(1768353501.307:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:18:27.106506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589423638.mount: Deactivated successfully. Jan 14 01:18:27.384760 update_engine[2406]: I20260114 01:18:27.384582 2406 update_attempter.cc:509] Updating boot flags... Jan 14 01:18:28.781271 containerd[2417]: time="2026-01-14T01:18:28.781197210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:29.126780 containerd[2417]: time="2026-01-14T01:18:29.126617834Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29059575" Jan 14 01:18:29.130806 containerd[2417]: time="2026-01-14T01:18:29.130759684Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:29.177951 containerd[2417]: time="2026-01-14T01:18:29.177738276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:29.178782 containerd[2417]: time="2026-01-14T01:18:29.178759238Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 12.284468275s" Jan 14 01:18:29.178837 containerd[2417]: time="2026-01-14T01:18:29.178790722Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 14 01:18:29.179482 containerd[2417]: time="2026-01-14T01:18:29.179461505Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 01:18:31.499768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 01:18:31.503468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:18:37.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:37.063842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:18:37.068689 kernel: audit: type=1130 audit(1768353517.063:311): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:37.072888 (kubelet)[3229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:18:37.119158 kubelet[3229]: E0114 01:18:37.119116 3229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:18:37.121065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:18:37.121187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:18:37.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:18:37.121501 systemd[1]: kubelet.service: Consumed 141ms CPU time, 110.2M memory peak. 
Jan 14 01:18:37.127654 kernel: audit: type=1131 audit(1768353517.120:312): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:18:38.325275 containerd[2417]: time="2026-01-14T01:18:38.325221999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:38.418478 containerd[2417]: time="2026-01-14T01:18:38.418417719Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 14 01:18:38.767114 containerd[2417]: time="2026-01-14T01:18:38.767049705Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:38.877938 containerd[2417]: time="2026-01-14T01:18:38.877869853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:38.879051 containerd[2417]: time="2026-01-14T01:18:38.878927019Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 9.699437963s" Jan 14 01:18:38.879051 containerd[2417]: time="2026-01-14T01:18:38.878960423Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 14 
01:18:38.880075 containerd[2417]: time="2026-01-14T01:18:38.879743406Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 01:18:47.249568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 01:18:47.251465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:18:47.946675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:18:47.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:47.952701 kernel: audit: type=1130 audit(1768353527.945:313): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:47.958831 (kubelet)[3244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:18:47.992009 kubelet[3244]: E0114 01:18:47.991967 3244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:18:47.993970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:18:47.994105 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:18:47.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:18:47.994503 systemd[1]: kubelet.service: Consumed 128ms CPU time, 110.4M memory peak. 
Jan 14 01:18:47.999656 kernel: audit: type=1131 audit(1768353527.992:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:18:54.037367 containerd[2417]: time="2026-01-14T01:18:54.037315298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:54.043602 containerd[2417]: time="2026-01-14T01:18:54.043428349Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19400173" Jan 14 01:18:54.047652 containerd[2417]: time="2026-01-14T01:18:54.047618561Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:54.052342 containerd[2417]: time="2026-01-14T01:18:54.052311456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:54.053041 containerd[2417]: time="2026-01-14T01:18:54.053018531Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 15.173233981s" Jan 14 01:18:54.053123 containerd[2417]: time="2026-01-14T01:18:54.053111960Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 14 01:18:54.053897 containerd[2417]: time="2026-01-14T01:18:54.053867678Z" 
level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 01:18:55.170045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850075930.mount: Deactivated successfully. Jan 14 01:18:55.573991 containerd[2417]: time="2026-01-14T01:18:55.573943488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:55.577067 containerd[2417]: time="2026-01-14T01:18:55.576978019Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 14 01:18:55.580368 containerd[2417]: time="2026-01-14T01:18:55.580341360Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:55.583918 containerd[2417]: time="2026-01-14T01:18:55.583871113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:55.584468 containerd[2417]: time="2026-01-14T01:18:55.584237688Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.530333029s" Jan 14 01:18:55.584468 containerd[2417]: time="2026-01-14T01:18:55.584269236Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 14 01:18:55.584840 containerd[2417]: time="2026-01-14T01:18:55.584816635Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 01:18:56.253616 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279415213.mount: Deactivated successfully. Jan 14 01:18:57.286710 containerd[2417]: time="2026-01-14T01:18:57.286663213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:57.289280 containerd[2417]: time="2026-01-14T01:18:57.289245942Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17570073" Jan 14 01:18:57.292844 containerd[2417]: time="2026-01-14T01:18:57.292807085Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:57.305523 containerd[2417]: time="2026-01-14T01:18:57.305311501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:18:57.306280 containerd[2417]: time="2026-01-14T01:18:57.305953079Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.721108299s" Jan 14 01:18:57.306280 containerd[2417]: time="2026-01-14T01:18:57.305982509Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 14 01:18:57.306411 containerd[2417]: time="2026-01-14T01:18:57.306394170Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:18:57.831529 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3736029650.mount: Deactivated successfully. Jan 14 01:18:57.858842 containerd[2417]: time="2026-01-14T01:18:57.858805797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:18:57.862113 containerd[2417]: time="2026-01-14T01:18:57.862009419Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:18:57.865335 containerd[2417]: time="2026-01-14T01:18:57.865309703Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:18:57.871336 containerd[2417]: time="2026-01-14T01:18:57.871242508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:18:57.871998 containerd[2417]: time="2026-01-14T01:18:57.871677565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 565.247731ms" Jan 14 01:18:57.871998 containerd[2417]: time="2026-01-14T01:18:57.871705455Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 01:18:57.872129 containerd[2417]: time="2026-01-14T01:18:57.872112471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 01:18:57.999766 
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 14 01:18:58.001262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:18:59.905559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:18:59.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:59.911656 kernel: audit: type=1130 audit(1768353539.905:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:18:59.918825 (kubelet)[3329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:18:59.950500 kubelet[3329]: E0114 01:18:59.950456 3329 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:18:59.951915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:18:59.952059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:18:59.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:18:59.952388 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.3M memory peak. 
Jan 14 01:18:59.956661 kernel: audit: type=1131 audit(1768353539.951:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:19:09.999739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 14 01:19:10.001216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:19:15.798759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:19:15.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:15.804711 kernel: audit: type=1130 audit(1768353555.797:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:15.814909 (kubelet)[3344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:19:15.850184 kubelet[3344]: E0114 01:19:15.850139 3344 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:19:15.851593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:19:15.851734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:19:15.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 14 01:19:15.852075 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110.6M memory peak. Jan 14 01:19:15.857832 kernel: audit: type=1131 audit(1768353555.850:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:19:16.030033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287199385.mount: Deactivated successfully. Jan 14 01:19:17.438459 containerd[2417]: time="2026-01-14T01:19:17.438410777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:17.442244 containerd[2417]: time="2026-01-14T01:19:17.442099747Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57674024" Jan 14 01:19:17.445516 containerd[2417]: time="2026-01-14T01:19:17.445492381Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:17.450226 containerd[2417]: time="2026-01-14T01:19:17.450197155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:17.451664 containerd[2417]: time="2026-01-14T01:19:17.450986926Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 19.578846879s" Jan 14 01:19:17.451664 containerd[2417]: time="2026-01-14T01:19:17.451017088Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns 
image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 14 01:19:19.580445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:19:19.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.580616 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110.6M memory peak. Jan 14 01:19:19.585890 kernel: audit: type=1130 audit(1768353559.579:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.586106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:19:19.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.592661 kernel: audit: type=1131 audit(1768353559.579:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.612461 systemd[1]: Reload requested from client PID 3433 ('systemctl') (unit session-10.scope)... Jan 14 01:19:19.612578 systemd[1]: Reloading... Jan 14 01:19:19.720301 zram_generator::config[3479]: No configuration found. Jan 14 01:19:19.918334 systemd[1]: Reloading finished in 305 ms. Jan 14 01:19:19.944502 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:19:19.944717 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:19:19.945223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:19:19.945357 systemd[1]: kubelet.service: Consumed 72ms CPU time, 70.2M memory peak. Jan 14 01:19:19.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:19:19.950667 kernel: audit: type=1130 audit(1768353559.944:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:19:19.951087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:19:19.951000 audit: BPF prog-id=87 op=LOAD Jan 14 01:19:19.953000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:19:19.956376 kernel: audit: type=1334 audit(1768353559.951:322): prog-id=87 op=LOAD Jan 14 01:19:19.956419 kernel: audit: type=1334 audit(1768353559.953:323): prog-id=80 op=UNLOAD Jan 14 01:19:19.954000 audit: BPF prog-id=88 op=LOAD Jan 14 01:19:19.958263 kernel: audit: type=1334 audit(1768353559.954:324): prog-id=88 op=LOAD Jan 14 01:19:19.954000 audit: BPF prog-id=89 op=LOAD Jan 14 01:19:19.959691 kernel: audit: type=1334 audit(1768353559.954:325): prog-id=89 op=LOAD Jan 14 01:19:19.954000 audit: BPF prog-id=81 op=UNLOAD Jan 14 01:19:19.961218 kernel: audit: type=1334 audit(1768353559.954:326): prog-id=81 op=UNLOAD Jan 14 01:19:19.954000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:19:19.954000 audit: BPF prog-id=90 op=LOAD Jan 14 01:19:19.954000 audit: BPF prog-id=86 op=UNLOAD Jan 14 01:19:19.954000 audit: BPF prog-id=91 op=LOAD Jan 14 01:19:19.954000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:19:19.954000 audit: BPF prog-id=92 op=LOAD Jan 14 01:19:19.954000 audit: BPF prog-id=93 op=LOAD Jan 14 01:19:19.954000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:19:19.954000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:19:19.956000 audit: BPF prog-id=94 op=LOAD Jan 14 01:19:19.962000 audit: BPF 
prog-id=67 op=UNLOAD Jan 14 01:19:19.963000 audit: BPF prog-id=95 op=LOAD Jan 14 01:19:19.963000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:19:19.964000 audit: BPF prog-id=96 op=LOAD Jan 14 01:19:19.964000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:19:19.964000 audit: BPF prog-id=97 op=LOAD Jan 14 01:19:19.964000 audit: BPF prog-id=98 op=LOAD Jan 14 01:19:19.964000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:19:19.964000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:19:19.965000 audit: BPF prog-id=99 op=LOAD Jan 14 01:19:19.965000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:19:19.965000 audit: BPF prog-id=100 op=LOAD Jan 14 01:19:19.965000 audit: BPF prog-id=101 op=LOAD Jan 14 01:19:19.965000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:19:19.965000 audit: BPF prog-id=85 op=UNLOAD Jan 14 01:19:19.966000 audit: BPF prog-id=102 op=LOAD Jan 14 01:19:19.966000 audit: BPF prog-id=103 op=LOAD Jan 14 01:19:19.966000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:19:19.966000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:19:19.966000 audit: BPF prog-id=104 op=LOAD Jan 14 01:19:19.966000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:19:19.966000 audit: BPF prog-id=105 op=LOAD Jan 14 01:19:19.966000 audit: BPF prog-id=106 op=LOAD Jan 14 01:19:19.966000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:19:19.966000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:19:20.598144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:19:20.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:20.605869 (kubelet)[3550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:19:20.642665 kubelet[3550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:19:20.642665 kubelet[3550]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:19:20.642665 kubelet[3550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:19:20.642665 kubelet[3550]: I0114 01:19:20.641284 3550 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:19:20.887063 kubelet[3550]: I0114 01:19:20.886959 3550 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 01:19:20.887063 kubelet[3550]: I0114 01:19:20.886999 3550 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:19:20.887571 kubelet[3550]: I0114 01:19:20.887240 3550 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 01:19:20.915000 kubelet[3550]: E0114 01:19:20.914971 3550 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:19:20.918260 kubelet[3550]: I0114 01:19:20.917702 3550 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:19:20.925550 kubelet[3550]: I0114 01:19:20.925529 3550 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:19:20.928197 kubelet[3550]: I0114 
01:19:20.928178 3550 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 01:19:20.928397 kubelet[3550]: I0114 01:19:20.928368 3550 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:19:20.928541 kubelet[3550]: I0114 01:19:20.928394 3550 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-dbef80f9ad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 
01:19:20.928691 kubelet[3550]: I0114 01:19:20.928547 3550 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:19:20.928691 kubelet[3550]: I0114 01:19:20.928560 3550 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 01:19:20.928691 kubelet[3550]: I0114 01:19:20.928675 3550 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:19:20.932401 kubelet[3550]: I0114 01:19:20.932388 3550 kubelet.go:446] "Attempting to sync node with API server" Jan 14 01:19:20.932466 kubelet[3550]: I0114 01:19:20.932413 3550 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:19:20.932466 kubelet[3550]: I0114 01:19:20.932435 3550 kubelet.go:352] "Adding apiserver pod source" Jan 14 01:19:20.932466 kubelet[3550]: I0114 01:19:20.932448 3550 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:19:20.941305 kubelet[3550]: W0114 01:19:20.941251 3550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Jan 14 01:19:20.941377 kubelet[3550]: E0114 01:19:20.941307 3550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:19:20.941406 kubelet[3550]: W0114 01:19:20.941370 3550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-dbef80f9ad&limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Jan 14 01:19:20.941406 kubelet[3550]: E0114 
01:19:20.941398 3550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-dbef80f9ad&limit=500&resourceVersion=0\": dial tcp 10.200.4.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:19:20.942318 kubelet[3550]: I0114 01:19:20.941466 3550 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:19:20.942318 kubelet[3550]: I0114 01:19:20.941857 3550 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 01:19:20.942554 kubelet[3550]: W0114 01:19:20.942540 3550 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 01:19:20.944604 kubelet[3550]: I0114 01:19:20.944581 3550 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:19:20.944686 kubelet[3550]: I0114 01:19:20.944616 3550 server.go:1287] "Started kubelet" Jan 14 01:19:20.945665 kubelet[3550]: I0114 01:19:20.944747 3550 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:19:20.945665 kubelet[3550]: I0114 01:19:20.945589 3550 server.go:479] "Adding debug handlers to kubelet server" Jan 14 01:19:20.948061 kubelet[3550]: I0114 01:19:20.948045 3550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:19:20.948186 kubelet[3550]: I0114 01:19:20.948144 3550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:19:20.948332 kubelet[3550]: I0114 01:19:20.948317 3550 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:19:20.949843 kubelet[3550]: E0114 01:19:20.948466 3550 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.200.4.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4578.0.0-p-dbef80f9ad.188a7431cd6651a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578.0.0-p-dbef80f9ad,UID:ci-4578.0.0-p-dbef80f9ad,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4578.0.0-p-dbef80f9ad,},FirstTimestamp:2026-01-14 01:19:20.944595362 +0000 UTC m=+0.335601488,LastTimestamp:2026-01-14 01:19:20.944595362 +0000 UTC m=+0.335601488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578.0.0-p-dbef80f9ad,}" Jan 14 01:19:20.950000 audit[3561]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:20.952874 kubelet[3550]: I0114 01:19:20.952859 3550 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:19:20.953260 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 14 01:19:20.953327 kernel: audit: type=1325 audit(1768353560.950:363): table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:20.954516 kubelet[3550]: I0114 01:19:20.954502 3550 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:19:20.956425 kubelet[3550]: E0114 01:19:20.956400 3550 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-dbef80f9ad\" not found" Jan 14 01:19:20.965654 kernel: audit: type=1300 audit(1768353560.950:363): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcb3447450 a2=0 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:20.950000 audit[3561]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcb3447450 a2=0 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:20.965816 kubelet[3550]: I0114 01:19:20.958560 3550 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:19:20.965816 kubelet[3550]: I0114 01:19:20.958604 3550 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:19:20.965816 kubelet[3550]: W0114 01:19:20.960657 3550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Jan 14 01:19:20.965816 kubelet[3550]: E0114 01:19:20.960701 3550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:19:20.965816 kubelet[3550]: E0114 01:19:20.960782 3550 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:19:20.965816 kubelet[3550]: E0114 01:19:20.960844 3550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-dbef80f9ad?timeout=10s\": dial tcp 10.200.4.14:6443: connect: connection refused" interval="200ms" Jan 14 01:19:20.965816 kubelet[3550]: I0114 01:19:20.962115 3550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:19:20.965816 kubelet[3550]: I0114 01:19:20.963371 3550 factory.go:221] Registration of the containerd container factory successfully Jan 14 01:19:20.965816 kubelet[3550]: I0114 01:19:20.963379 3550 factory.go:221] Registration of the systemd container factory successfully Jan 14 01:19:20.950000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:19:20.976660 kernel: audit: type=1327 audit(1768353560.950:363): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:19:20.976716 kernel: audit: type=1325 audit(1768353560.965:364): table=filter:46 family=2 entries=1 op=nft_register_chain pid=3562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:20.965000 audit[3562]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:20.965000 audit[3562]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc680cee10 a2=0 a3=0 items=0 ppid=3550 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:20.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:19:20.985752 kubelet[3550]: I0114 01:19:20.985399 3550 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:19:20.985752 kubelet[3550]: I0114 01:19:20.985408 3550 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:19:20.985752 kubelet[3550]: I0114 01:19:20.985423 3550 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:19:20.986910 kernel: audit: type=1300 audit(1768353560.965:364): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc680cee10 a2=0 a3=0 items=0 ppid=3550 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:20.986952 kernel: audit: type=1327 audit(1768353560.965:364): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:19:20.975000 audit[3566]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:20.975000 audit[3566]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcd9d18620 a2=0 a3=0 items=0 ppid=3550 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:20.993822 kubelet[3550]: I0114 01:19:20.992929 3550 policy_none.go:49] "None policy: Start" Jan 14 01:19:20.993822 kubelet[3550]: I0114 01:19:20.993002 3550 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:19:20.993822 kubelet[3550]: I0114 01:19:20.993072 3550 state_mem.go:35] "Initializing new in-memory state store" Jan 14 
01:19:20.994041 kernel: audit: type=1325 audit(1768353560.975:365): table=filter:47 family=2 entries=2 op=nft_register_chain pid=3566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:20.994068 kernel: audit: type=1300 audit(1768353560.975:365): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcd9d18620 a2=0 a3=0 items=0 ppid=3550 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:20.975000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:19:20.996626 kernel: audit: type=1327 audit(1768353560.975:365): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:19:20.978000 audit[3568]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:20.999478 kernel: audit: type=1325 audit(1768353560.978:366): table=filter:48 family=2 entries=2 op=nft_register_chain pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:20.978000 audit[3568]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff24c656d0 a2=0 a3=0 items=0 ppid=3550 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:20.978000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:19:21.003964 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 14 01:19:21.010000 audit[3572]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:21.010000 audit[3572]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc0a16fe80 a2=0 a3=0 items=0 ppid=3550 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:21.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:19:21.011587 kubelet[3550]: I0114 01:19:21.011412 3550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 01:19:21.012000 audit[3573]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:21.012000 audit[3573]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd6e0f5640 a2=0 a3=0 items=0 ppid=3550 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:21.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:19:21.015266 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 01:19:21.016451 kubelet[3550]: I0114 01:19:21.016430 3550 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:19:21.016514 kubelet[3550]: I0114 01:19:21.016455 3550 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 01:19:21.016514 kubelet[3550]: I0114 01:19:21.016469 3550 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:19:21.016514 kubelet[3550]: I0114 01:19:21.016476 3550 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 01:19:21.016000 audit[3574]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:21.016000 audit[3574]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4d496ef0 a2=0 a3=0 items=0 ppid=3550 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:21.016000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:19:21.017000 audit[3577]: NETFILTER_CFG table=mangle:52 family=10 entries=1 op=nft_register_chain pid=3577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:21.017000 audit[3577]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea3c43d00 a2=0 a3=0 items=0 ppid=3550 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:21.017000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:19:21.019671 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 14 01:19:21.019934 kubelet[3550]: E0114 01:19:21.019907 3550 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:19:21.020917 kubelet[3550]: W0114 01:19:21.020881 3550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Jan 14 01:19:21.020986 kubelet[3550]: E0114 01:19:21.020928 3550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:19:21.022000 audit[3579]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_chain pid=3579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:21.022000 audit[3578]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:21.022000 audit[3578]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0c61ae60 a2=0 a3=0 items=0 ppid=3550 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:21.022000 audit[3579]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff24372020 a2=0 a3=0 items=0 ppid=3550 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:21.022000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:19:21.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:19:21.023000 audit[3580]: NETFILTER_CFG table=filter:55 family=10 entries=1 op=nft_register_chain pid=3580 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:21.023000 audit[3580]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6503d0c0 a2=0 a3=0 items=0 ppid=3550 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:21.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:19:21.023000 audit[3581]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:21.023000 audit[3581]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2b8c82b0 a2=0 a3=0 items=0 ppid=3550 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:21.023000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:19:21.025269 kubelet[3550]: I0114 01:19:21.025245 3550 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 01:19:21.025399 kubelet[3550]: I0114 01:19:21.025385 3550 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:19:21.025439 kubelet[3550]: I0114 
01:19:21.025397 3550 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:19:21.025622 kubelet[3550]: I0114 01:19:21.025611 3550 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:19:21.026835 kubelet[3550]: E0114 01:19:21.026815 3550 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:19:21.026900 kubelet[3550]: E0114 01:19:21.026862 3550 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4578.0.0-p-dbef80f9ad\" not found" Jan 14 01:19:21.130720 kubelet[3550]: I0114 01:19:21.130391 3550 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.131175 kubelet[3550]: E0114 01:19:21.131148 3550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.14:6443/api/v1/nodes\": dial tcp 10.200.4.14:6443: connect: connection refused" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.131389 systemd[1]: Created slice kubepods-burstable-pod141ff9ad7f0b37e7a156600ce77dbaba.slice - libcontainer container kubepods-burstable-pod141ff9ad7f0b37e7a156600ce77dbaba.slice. Jan 14 01:19:21.137576 kubelet[3550]: E0114 01:19:21.137508 3550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-dbef80f9ad\" not found" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.141192 systemd[1]: Created slice kubepods-burstable-pod7a10f40b63a2a614fa694f9286a68124.slice - libcontainer container kubepods-burstable-pod7a10f40b63a2a614fa694f9286a68124.slice. 
Jan 14 01:19:21.158481 kubelet[3550]: E0114 01:19:21.158456 3550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-dbef80f9ad\" not found" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.160767 kubelet[3550]: I0114 01:19:21.160749 3550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.160830 kubelet[3550]: I0114 01:19:21.160779 3550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.160830 kubelet[3550]: I0114 01:19:21.160798 3550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.160881 kubelet[3550]: I0114 01:19:21.160846 3550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/141ff9ad7f0b37e7a156600ce77dbaba-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-dbef80f9ad\" (UID: \"141ff9ad7f0b37e7a156600ce77dbaba\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.160881 kubelet[3550]: I0114 
01:19:21.160865 3550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/141ff9ad7f0b37e7a156600ce77dbaba-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-dbef80f9ad\" (UID: \"141ff9ad7f0b37e7a156600ce77dbaba\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.160933 kubelet[3550]: I0114 01:19:21.160884 3550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/141ff9ad7f0b37e7a156600ce77dbaba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578.0.0-p-dbef80f9ad\" (UID: \"141ff9ad7f0b37e7a156600ce77dbaba\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.160933 kubelet[3550]: I0114 01:19:21.160914 3550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.160979 kubelet[3550]: I0114 01:19:21.160933 3550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.160979 kubelet[3550]: I0114 01:19:21.160953 3550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9737488ab4b98d787f7666b61963445-kubeconfig\") pod 
\"kube-scheduler-ci-4578.0.0-p-dbef80f9ad\" (UID: \"e9737488ab4b98d787f7666b61963445\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.161241 systemd[1]: Created slice kubepods-burstable-pode9737488ab4b98d787f7666b61963445.slice - libcontainer container kubepods-burstable-pode9737488ab4b98d787f7666b61963445.slice. Jan 14 01:19:21.161882 kubelet[3550]: E0114 01:19:21.161475 3550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-dbef80f9ad?timeout=10s\": dial tcp 10.200.4.14:6443: connect: connection refused" interval="400ms" Jan 14 01:19:21.163285 kubelet[3550]: E0114 01:19:21.163265 3550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-dbef80f9ad\" not found" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.332887 kubelet[3550]: I0114 01:19:21.332690 3550 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.333031 kubelet[3550]: E0114 01:19:21.333011 3550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.14:6443/api/v1/nodes\": dial tcp 10.200.4.14:6443: connect: connection refused" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.440449 containerd[2417]: time="2026-01-14T01:19:21.440342344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-dbef80f9ad,Uid:141ff9ad7f0b37e7a156600ce77dbaba,Namespace:kube-system,Attempt:0,}" Jan 14 01:19:21.460216 containerd[2417]: time="2026-01-14T01:19:21.460180988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-dbef80f9ad,Uid:7a10f40b63a2a614fa694f9286a68124,Namespace:kube-system,Attempt:0,}" Jan 14 01:19:21.464728 containerd[2417]: time="2026-01-14T01:19:21.464684191Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-dbef80f9ad,Uid:e9737488ab4b98d787f7666b61963445,Namespace:kube-system,Attempt:0,}" Jan 14 01:19:21.562705 kubelet[3550]: E0114 01:19:21.562662 3550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-dbef80f9ad?timeout=10s\": dial tcp 10.200.4.14:6443: connect: connection refused" interval="800ms" Jan 14 01:19:21.735172 kubelet[3550]: I0114 01:19:21.734843 3550 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.735172 kubelet[3550]: E0114 01:19:21.735159 3550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.14:6443/api/v1/nodes\": dial tcp 10.200.4.14:6443: connect: connection refused" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:21.945888 containerd[2417]: time="2026-01-14T01:19:21.945811068Z" level=info msg="connecting to shim 74bf4d27ae09930dfa4d290925b284cd0bfaadf11203039b0237f14642480192" address="unix:///run/containerd/s/b9247f92f89e8be0a15f850507b1d6798f1b0782c72a12af71a1f32063b58478" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:19:21.946817 containerd[2417]: time="2026-01-14T01:19:21.946754056Z" level=info msg="connecting to shim 3885c7f17ddbaf7aa782f3e75f834a9ff93b8888f049ed541d4106f10641e030" address="unix:///run/containerd/s/cebc522813cfed62b6a931c9ed81aa9bbc86dd7fe668f84bcf267c1501695f13" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:19:21.968650 containerd[2417]: time="2026-01-14T01:19:21.967972828Z" level=info msg="connecting to shim 00cb163adcbe5ce137a5f69903bffd84c2f6777bb6899c2efd3d8d5104494c1e" address="unix:///run/containerd/s/ab8069bb2c1d62606516101c870c6dc40f45f4ea55206f176da6c758d7a145e9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:19:21.984890 systemd[1]: Started 
cri-containerd-74bf4d27ae09930dfa4d290925b284cd0bfaadf11203039b0237f14642480192.scope - libcontainer container 74bf4d27ae09930dfa4d290925b284cd0bfaadf11203039b0237f14642480192. Jan 14 01:19:21.990139 systemd[1]: Started cri-containerd-3885c7f17ddbaf7aa782f3e75f834a9ff93b8888f049ed541d4106f10641e030.scope - libcontainer container 3885c7f17ddbaf7aa782f3e75f834a9ff93b8888f049ed541d4106f10641e030. Jan 14 01:19:22.004000 audit: BPF prog-id=107 op=LOAD Jan 14 01:19:22.008000 audit: BPF prog-id=108 op=LOAD Jan 14 01:19:22.008861 systemd[1]: Started cri-containerd-00cb163adcbe5ce137a5f69903bffd84c2f6777bb6899c2efd3d8d5104494c1e.scope - libcontainer container 00cb163adcbe5ce137a5f69903bffd84c2f6777bb6899c2efd3d8d5104494c1e. Jan 14 01:19:22.008000 audit[3627]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3597 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734626634643237616530393933306466613464323930393235623238 Jan 14 01:19:22.008000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:19:22.008000 audit[3627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.008000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734626634643237616530393933306466613464323930393235623238 Jan 14 01:19:22.008000 audit: BPF prog-id=109 op=LOAD Jan 14 01:19:22.008000 audit[3627]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3597 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734626634643237616530393933306466613464323930393235623238 Jan 14 01:19:22.008000 audit: BPF prog-id=110 op=LOAD Jan 14 01:19:22.008000 audit[3627]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3597 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734626634643237616530393933306466613464323930393235623238 Jan 14 01:19:22.008000 audit: BPF prog-id=110 op=UNLOAD Jan 14 01:19:22.008000 audit[3627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:19:22.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734626634643237616530393933306466613464323930393235623238 Jan 14 01:19:22.008000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:19:22.008000 audit[3627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734626634643237616530393933306466613464323930393235623238 Jan 14 01:19:22.009000 audit: BPF prog-id=111 op=LOAD Jan 14 01:19:22.009000 audit[3627]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3597 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734626634643237616530393933306466613464323930393235623238 Jan 14 01:19:22.011000 audit: BPF prog-id=112 op=LOAD Jan 14 01:19:22.011000 audit: BPF prog-id=113 op=LOAD Jan 14 01:19:22.011000 audit[3624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=3601 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383563376631376464626166376161373832663365373566383334 Jan 14 01:19:22.012000 audit: BPF prog-id=113 op=UNLOAD Jan 14 01:19:22.012000 audit[3624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3601 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383563376631376464626166376161373832663365373566383334 Jan 14 01:19:22.012000 audit: BPF prog-id=114 op=LOAD Jan 14 01:19:22.012000 audit[3624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=3601 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383563376631376464626166376161373832663365373566383334 Jan 14 01:19:22.012000 audit: BPF prog-id=115 op=LOAD Jan 14 01:19:22.012000 audit[3624]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=3601 pid=3624 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383563376631376464626166376161373832663365373566383334 Jan 14 01:19:22.013000 audit: BPF prog-id=115 op=UNLOAD Jan 14 01:19:22.013000 audit[3624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3601 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383563376631376464626166376161373832663365373566383334 Jan 14 01:19:22.014000 audit: BPF prog-id=114 op=UNLOAD Jan 14 01:19:22.014000 audit[3624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3601 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383563376631376464626166376161373832663365373566383334 Jan 14 01:19:22.014000 audit: BPF prog-id=116 op=LOAD Jan 14 01:19:22.014000 audit[3624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 
ppid=3601 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383563376631376464626166376161373832663365373566383334 Jan 14 01:19:22.021000 audit: BPF prog-id=117 op=LOAD Jan 14 01:19:22.022000 audit: BPF prog-id=118 op=LOAD Jan 14 01:19:22.022000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3633 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030636231363361646362653563653133376135663639393033626666 Jan 14 01:19:22.022000 audit: BPF prog-id=118 op=UNLOAD Jan 14 01:19:22.022000 audit[3666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3633 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030636231363361646362653563653133376135663639393033626666 Jan 14 01:19:22.022000 audit: BPF prog-id=119 op=LOAD Jan 14 01:19:22.022000 
audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3633 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030636231363361646362653563653133376135663639393033626666 Jan 14 01:19:22.022000 audit: BPF prog-id=120 op=LOAD Jan 14 01:19:22.022000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3633 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030636231363361646362653563653133376135663639393033626666 Jan 14 01:19:22.022000 audit: BPF prog-id=120 op=UNLOAD Jan 14 01:19:22.022000 audit[3666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3633 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030636231363361646362653563653133376135663639393033626666 Jan 14 01:19:22.022000 audit: BPF 
prog-id=119 op=UNLOAD Jan 14 01:19:22.022000 audit[3666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3633 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030636231363361646362653563653133376135663639393033626666 Jan 14 01:19:22.022000 audit: BPF prog-id=121 op=LOAD Jan 14 01:19:22.022000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3633 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030636231363361646362653563653133376135663639393033626666 Jan 14 01:19:22.064384 containerd[2417]: time="2026-01-14T01:19:22.064272970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-dbef80f9ad,Uid:141ff9ad7f0b37e7a156600ce77dbaba,Namespace:kube-system,Attempt:0,} returns sandbox id \"74bf4d27ae09930dfa4d290925b284cd0bfaadf11203039b0237f14642480192\"" Jan 14 01:19:22.068237 containerd[2417]: time="2026-01-14T01:19:22.068215161Z" level=info msg="CreateContainer within sandbox \"74bf4d27ae09930dfa4d290925b284cd0bfaadf11203039b0237f14642480192\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:19:22.083412 containerd[2417]: 
time="2026-01-14T01:19:22.083387364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-dbef80f9ad,Uid:e9737488ab4b98d787f7666b61963445,Namespace:kube-system,Attempt:0,} returns sandbox id \"00cb163adcbe5ce137a5f69903bffd84c2f6777bb6899c2efd3d8d5104494c1e\"" Jan 14 01:19:22.087475 containerd[2417]: time="2026-01-14T01:19:22.087451532Z" level=info msg="CreateContainer within sandbox \"00cb163adcbe5ce137a5f69903bffd84c2f6777bb6899c2efd3d8d5104494c1e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:19:22.099419 containerd[2417]: time="2026-01-14T01:19:22.099393833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-dbef80f9ad,Uid:7a10f40b63a2a614fa694f9286a68124,Namespace:kube-system,Attempt:0,} returns sandbox id \"3885c7f17ddbaf7aa782f3e75f834a9ff93b8888f049ed541d4106f10641e030\"" Jan 14 01:19:22.100948 containerd[2417]: time="2026-01-14T01:19:22.100914274Z" level=info msg="CreateContainer within sandbox \"3885c7f17ddbaf7aa782f3e75f834a9ff93b8888f049ed541d4106f10641e030\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:19:22.114881 containerd[2417]: time="2026-01-14T01:19:22.114857461Z" level=info msg="Container bfa7bd57be7f395f2c3d599906f371c4a36efad0c096e1ac7397549a693583b4: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:19:22.145709 containerd[2417]: time="2026-01-14T01:19:22.145684159Z" level=info msg="Container 457623f22f959aea6fdff264b9dfc2560bf856ff25acdfa682be7fd1dbc34cd4: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:19:22.171206 containerd[2417]: time="2026-01-14T01:19:22.171177362Z" level=info msg="Container 5553143182b75ed8ccc029243ac85b8bae69fccbdc551b2bcfd8150bd59daf82: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:19:22.190694 containerd[2417]: time="2026-01-14T01:19:22.190668319Z" level=info msg="CreateContainer within sandbox 
\"74bf4d27ae09930dfa4d290925b284cd0bfaadf11203039b0237f14642480192\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bfa7bd57be7f395f2c3d599906f371c4a36efad0c096e1ac7397549a693583b4\"" Jan 14 01:19:22.191102 containerd[2417]: time="2026-01-14T01:19:22.191079228Z" level=info msg="StartContainer for \"bfa7bd57be7f395f2c3d599906f371c4a36efad0c096e1ac7397549a693583b4\"" Jan 14 01:19:22.191801 containerd[2417]: time="2026-01-14T01:19:22.191768358Z" level=info msg="connecting to shim bfa7bd57be7f395f2c3d599906f371c4a36efad0c096e1ac7397549a693583b4" address="unix:///run/containerd/s/b9247f92f89e8be0a15f850507b1d6798f1b0782c72a12af71a1f32063b58478" protocol=ttrpc version=3 Jan 14 01:19:22.207946 containerd[2417]: time="2026-01-14T01:19:22.207923970Z" level=info msg="CreateContainer within sandbox \"3885c7f17ddbaf7aa782f3e75f834a9ff93b8888f049ed541d4106f10641e030\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5553143182b75ed8ccc029243ac85b8bae69fccbdc551b2bcfd8150bd59daf82\"" Jan 14 01:19:22.208289 containerd[2417]: time="2026-01-14T01:19:22.208266235Z" level=info msg="StartContainer for \"5553143182b75ed8ccc029243ac85b8bae69fccbdc551b2bcfd8150bd59daf82\"" Jan 14 01:19:22.208830 systemd[1]: Started cri-containerd-bfa7bd57be7f395f2c3d599906f371c4a36efad0c096e1ac7397549a693583b4.scope - libcontainer container bfa7bd57be7f395f2c3d599906f371c4a36efad0c096e1ac7397549a693583b4. 
Jan 14 01:19:22.209097 containerd[2417]: time="2026-01-14T01:19:22.209003721Z" level=info msg="connecting to shim 5553143182b75ed8ccc029243ac85b8bae69fccbdc551b2bcfd8150bd59daf82" address="unix:///run/containerd/s/cebc522813cfed62b6a931c9ed81aa9bbc86dd7fe668f84bcf267c1501695f13" protocol=ttrpc version=3 Jan 14 01:19:22.219498 containerd[2417]: time="2026-01-14T01:19:22.219463869Z" level=info msg="CreateContainer within sandbox \"00cb163adcbe5ce137a5f69903bffd84c2f6777bb6899c2efd3d8d5104494c1e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"457623f22f959aea6fdff264b9dfc2560bf856ff25acdfa682be7fd1dbc34cd4\"" Jan 14 01:19:22.222674 containerd[2417]: time="2026-01-14T01:19:22.221817603Z" level=info msg="StartContainer for \"457623f22f959aea6fdff264b9dfc2560bf856ff25acdfa682be7fd1dbc34cd4\"" Jan 14 01:19:22.224201 containerd[2417]: time="2026-01-14T01:19:22.222940831Z" level=info msg="connecting to shim 457623f22f959aea6fdff264b9dfc2560bf856ff25acdfa682be7fd1dbc34cd4" address="unix:///run/containerd/s/ab8069bb2c1d62606516101c870c6dc40f45f4ea55206f176da6c758d7a145e9" protocol=ttrpc version=3 Jan 14 01:19:22.229802 systemd[1]: Started cri-containerd-5553143182b75ed8ccc029243ac85b8bae69fccbdc551b2bcfd8150bd59daf82.scope - libcontainer container 5553143182b75ed8ccc029243ac85b8bae69fccbdc551b2bcfd8150bd59daf82. 
Jan 14 01:19:22.231000 audit: BPF prog-id=122 op=LOAD Jan 14 01:19:22.232000 audit: BPF prog-id=123 op=LOAD Jan 14 01:19:22.232000 audit[3723]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3597 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613762643537626537663339356632633364353939393036663337 Jan 14 01:19:22.232000 audit: BPF prog-id=123 op=UNLOAD Jan 14 01:19:22.232000 audit[3723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613762643537626537663339356632633364353939393036663337 Jan 14 01:19:22.232000 audit: BPF prog-id=124 op=LOAD Jan 14 01:19:22.232000 audit[3723]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3597 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.232000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613762643537626537663339356632633364353939393036663337 Jan 14 01:19:22.233000 audit: BPF prog-id=125 op=LOAD Jan 14 01:19:22.233000 audit[3723]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3597 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613762643537626537663339356632633364353939393036663337 Jan 14 01:19:22.233000 audit: BPF prog-id=125 op=UNLOAD Jan 14 01:19:22.233000 audit[3723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613762643537626537663339356632633364353939393036663337 Jan 14 01:19:22.233000 audit: BPF prog-id=124 op=UNLOAD Jan 14 01:19:22.233000 audit[3723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:19:22.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613762643537626537663339356632633364353939393036663337 Jan 14 01:19:22.234000 audit: BPF prog-id=126 op=LOAD Jan 14 01:19:22.234000 audit[3723]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3597 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613762643537626537663339356632633364353939393036663337 Jan 14 01:19:22.241954 systemd[1]: Started cri-containerd-457623f22f959aea6fdff264b9dfc2560bf856ff25acdfa682be7fd1dbc34cd4.scope - libcontainer container 457623f22f959aea6fdff264b9dfc2560bf856ff25acdfa682be7fd1dbc34cd4. 
Jan 14 01:19:22.250669 kubelet[3550]: W0114 01:19:22.250180 3550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Jan 14 01:19:22.252655 kubelet[3550]: E0114 01:19:22.250936 3550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:19:22.253000 audit: BPF prog-id=127 op=LOAD Jan 14 01:19:22.253000 audit: BPF prog-id=128 op=LOAD Jan 14 01:19:22.253000 audit[3736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3601 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353331343331383262373565643863636330323932343361633835 Jan 14 01:19:22.254000 audit: BPF prog-id=128 op=UNLOAD Jan 14 01:19:22.254000 audit[3736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3601 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.254000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353331343331383262373565643863636330323932343361633835 Jan 14 01:19:22.254000 audit: BPF prog-id=129 op=LOAD Jan 14 01:19:22.254000 audit[3736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3601 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353331343331383262373565643863636330323932343361633835 Jan 14 01:19:22.254000 audit: BPF prog-id=130 op=LOAD Jan 14 01:19:22.254000 audit[3736]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3601 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353331343331383262373565643863636330323932343361633835 Jan 14 01:19:22.254000 audit: BPF prog-id=130 op=UNLOAD Jan 14 01:19:22.254000 audit[3736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3601 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:19:22.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353331343331383262373565643863636330323932343361633835 Jan 14 01:19:22.254000 audit: BPF prog-id=129 op=UNLOAD Jan 14 01:19:22.254000 audit[3736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3601 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353331343331383262373565643863636330323932343361633835 Jan 14 01:19:22.254000 audit: BPF prog-id=131 op=LOAD Jan 14 01:19:22.254000 audit[3736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3601 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353331343331383262373565643863636330323932343361633835 Jan 14 01:19:22.261000 audit: BPF prog-id=132 op=LOAD Jan 14 01:19:22.262000 audit: BPF prog-id=133 op=LOAD Jan 14 01:19:22.262000 audit[3754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3633 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373632336632326639353961656136666466663236346239646663 Jan 14 01:19:22.262000 audit: BPF prog-id=133 op=UNLOAD Jan 14 01:19:22.262000 audit[3754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3633 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373632336632326639353961656136666466663236346239646663 Jan 14 01:19:22.262000 audit: BPF prog-id=134 op=LOAD Jan 14 01:19:22.262000 audit[3754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3633 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373632336632326639353961656136666466663236346239646663 Jan 14 01:19:22.262000 audit: BPF prog-id=135 op=LOAD Jan 14 01:19:22.262000 audit[3754]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3633 pid=3754 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373632336632326639353961656136666466663236346239646663 Jan 14 01:19:22.262000 audit: BPF prog-id=135 op=UNLOAD Jan 14 01:19:22.262000 audit[3754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3633 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373632336632326639353961656136666466663236346239646663 Jan 14 01:19:22.262000 audit: BPF prog-id=134 op=UNLOAD Jan 14 01:19:22.262000 audit[3754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3633 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373632336632326639353961656136666466663236346239646663 Jan 14 01:19:22.262000 audit: BPF prog-id=136 op=LOAD Jan 14 01:19:22.262000 audit[3754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 
ppid=3633 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:22.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373632336632326639353961656136666466663236346239646663 Jan 14 01:19:22.296111 containerd[2417]: time="2026-01-14T01:19:22.296086532Z" level=info msg="StartContainer for \"bfa7bd57be7f395f2c3d599906f371c4a36efad0c096e1ac7397549a693583b4\" returns successfully" Jan 14 01:19:22.310583 containerd[2417]: time="2026-01-14T01:19:22.310553890Z" level=info msg="StartContainer for \"5553143182b75ed8ccc029243ac85b8bae69fccbdc551b2bcfd8150bd59daf82\" returns successfully" Jan 14 01:19:22.375156 containerd[2417]: time="2026-01-14T01:19:22.374805280Z" level=info msg="StartContainer for \"457623f22f959aea6fdff264b9dfc2560bf856ff25acdfa682be7fd1dbc34cd4\" returns successfully" Jan 14 01:19:22.537767 kubelet[3550]: I0114 01:19:22.537740 3550 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:23.032056 kubelet[3550]: E0114 01:19:23.032026 3550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-dbef80f9ad\" not found" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:23.043035 kubelet[3550]: E0114 01:19:23.043009 3550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-dbef80f9ad\" not found" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:23.051501 kubelet[3550]: E0114 01:19:23.051481 3550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-dbef80f9ad\" not found" 
node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:23.784314 kubelet[3550]: E0114 01:19:23.784271 3550 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4578.0.0-p-dbef80f9ad\" not found" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:23.868125 kubelet[3550]: I0114 01:19:23.868096 3550 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:23.940866 kubelet[3550]: I0114 01:19:23.940690 3550 apiserver.go:52] "Watching apiserver" Jan 14 01:19:23.959109 kubelet[3550]: I0114 01:19:23.959084 3550 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:19:23.959187 kubelet[3550]: I0114 01:19:23.959131 3550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:24.032995 kubelet[3550]: E0114 01:19:24.032964 3550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-dbef80f9ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:24.032995 kubelet[3550]: I0114 01:19:24.032992 3550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:24.038073 kubelet[3550]: E0114 01:19:24.037991 3550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:24.038153 kubelet[3550]: I0114 01:19:24.038075 3550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:24.040742 kubelet[3550]: E0114 01:19:24.040702 3550 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4578.0.0-p-dbef80f9ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:24.051191 kubelet[3550]: I0114 01:19:24.050758 3550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:24.051507 kubelet[3550]: I0114 01:19:24.051497 3550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:24.053649 kubelet[3550]: E0114 01:19:24.053588 3550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-dbef80f9ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:24.053844 kubelet[3550]: E0114 01:19:24.053822 3550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578.0.0-p-dbef80f9ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:26.057381 systemd[1]: Reload requested from client PID 3819 ('systemctl') (unit session-10.scope)... Jan 14 01:19:26.057395 systemd[1]: Reloading... Jan 14 01:19:26.167662 zram_generator::config[3865]: No configuration found. Jan 14 01:19:26.405655 systemd[1]: Reloading finished in 347 ms. Jan 14 01:19:26.436169 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:19:26.454309 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:19:26.454565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:19:26.463148 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 14 01:19:26.463195 kernel: audit: type=1131 audit(1768353566.453:423): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.454625 systemd[1]: kubelet.service: Consumed 630ms CPU time, 130.6M memory peak. Jan 14 01:19:26.457871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:19:26.455000 audit: BPF prog-id=137 op=LOAD Jan 14 01:19:26.455000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:19:26.468010 kernel: audit: type=1334 audit(1768353566.455:424): prog-id=137 op=LOAD Jan 14 01:19:26.468039 kernel: audit: type=1334 audit(1768353566.455:425): prog-id=104 op=UNLOAD Jan 14 01:19:26.455000 audit: BPF prog-id=138 op=LOAD Jan 14 01:19:26.469503 kernel: audit: type=1334 audit(1768353566.455:426): prog-id=138 op=LOAD Jan 14 01:19:26.455000 audit: BPF prog-id=139 op=LOAD Jan 14 01:19:26.471156 kernel: audit: type=1334 audit(1768353566.455:427): prog-id=139 op=LOAD Jan 14 01:19:26.455000 audit: BPF prog-id=105 op=UNLOAD Jan 14 01:19:26.473679 kernel: audit: type=1334 audit(1768353566.455:428): prog-id=105 op=UNLOAD Jan 14 01:19:26.455000 audit: BPF prog-id=106 op=UNLOAD Jan 14 01:19:26.461000 audit: BPF prog-id=140 op=LOAD Jan 14 01:19:26.478705 kernel: audit: type=1334 audit(1768353566.455:429): prog-id=106 op=UNLOAD Jan 14 01:19:26.478754 kernel: audit: type=1334 audit(1768353566.461:430): prog-id=140 op=LOAD Jan 14 01:19:26.461000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:19:26.481745 kernel: audit: type=1334 audit(1768353566.461:431): prog-id=99 op=UNLOAD Jan 14 01:19:26.462000 audit: 
BPF prog-id=141 op=LOAD Jan 14 01:19:26.462000 audit: BPF prog-id=142 op=LOAD Jan 14 01:19:26.462000 audit: BPF prog-id=100 op=UNLOAD Jan 14 01:19:26.462000 audit: BPF prog-id=101 op=UNLOAD Jan 14 01:19:26.463000 audit: BPF prog-id=143 op=LOAD Jan 14 01:19:26.463000 audit: BPF prog-id=95 op=UNLOAD Jan 14 01:19:26.465000 audit: BPF prog-id=144 op=LOAD Jan 14 01:19:26.465000 audit: BPF prog-id=96 op=UNLOAD Jan 14 01:19:26.465000 audit: BPF prog-id=145 op=LOAD Jan 14 01:19:26.465000 audit: BPF prog-id=146 op=LOAD Jan 14 01:19:26.465000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:19:26.465000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:19:26.465000 audit: BPF prog-id=147 op=LOAD Jan 14 01:19:26.465000 audit: BPF prog-id=148 op=LOAD Jan 14 01:19:26.483646 kernel: audit: type=1334 audit(1768353566.462:432): prog-id=141 op=LOAD Jan 14 01:19:26.465000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:19:26.465000 audit: BPF prog-id=103 op=UNLOAD Jan 14 01:19:26.465000 audit: BPF prog-id=149 op=LOAD Jan 14 01:19:26.470000 audit: BPF prog-id=90 op=UNLOAD Jan 14 01:19:26.470000 audit: BPF prog-id=150 op=LOAD Jan 14 01:19:26.470000 audit: BPF prog-id=91 op=UNLOAD Jan 14 01:19:26.470000 audit: BPF prog-id=151 op=LOAD Jan 14 01:19:26.470000 audit: BPF prog-id=152 op=LOAD Jan 14 01:19:26.470000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:19:26.470000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:19:26.472000 audit: BPF prog-id=153 op=LOAD Jan 14 01:19:26.472000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:19:26.473000 audit: BPF prog-id=154 op=LOAD Jan 14 01:19:26.473000 audit: BPF prog-id=155 op=LOAD Jan 14 01:19:26.473000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:19:26.473000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:19:26.474000 audit: BPF prog-id=156 op=LOAD Jan 14 01:19:26.474000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:19:26.961804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:19:26.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.966107 (kubelet)[3936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:19:27.006654 kubelet[3936]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:19:27.006654 kubelet[3936]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:19:27.006654 kubelet[3936]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:19:27.007153 kubelet[3936]: I0114 01:19:27.007095 3936 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:19:27.012229 kubelet[3936]: I0114 01:19:27.012208 3936 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 01:19:27.012229 kubelet[3936]: I0114 01:19:27.012225 3936 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:19:27.012441 kubelet[3936]: I0114 01:19:27.012428 3936 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 01:19:27.013289 kubelet[3936]: I0114 01:19:27.013272 3936 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 14 01:19:27.015656 kubelet[3936]: I0114 01:19:27.015620 3936 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:19:27.019057 kubelet[3936]: I0114 01:19:27.019041 3936 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:19:27.022333 kubelet[3936]: I0114 01:19:27.022209 3936 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 01:19:27.022414 kubelet[3936]: I0114 01:19:27.022388 3936 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:19:27.022705 kubelet[3936]: I0114 01:19:27.022412 3936 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-dbef80f9ad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:19:27.022814 kubelet[3936]: I0114 01:19:27.022719 3936 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:19:27.022814 kubelet[3936]: I0114 01:19:27.022728 3936 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 01:19:27.022814 kubelet[3936]: I0114 01:19:27.022770 3936 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:19:27.022962 kubelet[3936]: I0114 01:19:27.022888 3936 kubelet.go:446] "Attempting to sync node with API server" Jan 14 01:19:27.022962 kubelet[3936]: I0114 01:19:27.022906 3936 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:19:27.022962 kubelet[3936]: I0114 01:19:27.022925 3936 kubelet.go:352] "Adding apiserver pod source" Jan 14 01:19:27.022962 kubelet[3936]: I0114 01:19:27.022935 3936 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:19:27.026280 kubelet[3936]: I0114 01:19:27.026222 3936 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:19:27.026805 kubelet[3936]: I0114 01:19:27.026728 3936 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 01:19:27.027243 kubelet[3936]: I0114 01:19:27.027233 3936 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:19:27.027333 kubelet[3936]: I0114 01:19:27.027328 3936 server.go:1287] "Started kubelet" Jan 14 01:19:27.029400 kubelet[3936]: I0114 01:19:27.029308 3936 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:19:27.038539 kubelet[3936]: I0114 
01:19:27.038469 3936 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:19:27.040511 kubelet[3936]: I0114 01:19:27.039475 3936 server.go:479] "Adding debug handlers to kubelet server" Jan 14 01:19:27.040511 kubelet[3936]: I0114 01:19:27.040258 3936 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:19:27.040511 kubelet[3936]: I0114 01:19:27.040413 3936 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:19:27.040779 kubelet[3936]: I0114 01:19:27.040768 3936 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:19:27.042588 kubelet[3936]: I0114 01:19:27.042577 3936 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:19:27.042883 kubelet[3936]: E0114 01:19:27.042863 3936 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-dbef80f9ad\" not found" Jan 14 01:19:27.043252 kubelet[3936]: I0114 01:19:27.043241 3936 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:19:27.044912 kubelet[3936]: I0114 01:19:27.044900 3936 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:19:27.046405 kubelet[3936]: I0114 01:19:27.046385 3936 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 01:19:27.047455 kubelet[3936]: I0114 01:19:27.047440 3936 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 01:19:27.047528 kubelet[3936]: I0114 01:19:27.047523 3936 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 01:19:27.047580 kubelet[3936]: I0114 01:19:27.047575 3936 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 01:19:27.047659 kubelet[3936]: I0114 01:19:27.047646 3936 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 01:19:27.047730 kubelet[3936]: E0114 01:19:27.047718 3936 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:19:27.052695 kubelet[3936]: I0114 01:19:27.051027 3936 factory.go:221] Registration of the systemd container factory successfully Jan 14 01:19:27.052695 kubelet[3936]: I0114 01:19:27.051135 3936 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:19:27.061243 kubelet[3936]: I0114 01:19:27.061001 3936 factory.go:221] Registration of the containerd container factory successfully Jan 14 01:19:27.066006 kubelet[3936]: E0114 01:19:27.065982 3936 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:19:27.106700 kubelet[3936]: I0114 01:19:27.106677 3936 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:19:27.106700 kubelet[3936]: I0114 01:19:27.106692 3936 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:19:27.106804 kubelet[3936]: I0114 01:19:27.106706 3936 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:19:27.106854 kubelet[3936]: I0114 01:19:27.106837 3936 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:19:27.106889 kubelet[3936]: I0114 01:19:27.106851 3936 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:19:27.106889 kubelet[3936]: I0114 01:19:27.106867 3936 policy_none.go:49] "None policy: Start" Jan 14 01:19:27.106889 kubelet[3936]: I0114 01:19:27.106876 3936 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:19:27.106889 kubelet[3936]: I0114 01:19:27.106884 3936 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:19:27.107004 kubelet[3936]: I0114 01:19:27.106978 3936 state_mem.go:75] "Updated machine memory state" Jan 14 01:19:27.112819 kubelet[3936]: I0114 01:19:27.111667 3936 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 01:19:27.112819 kubelet[3936]: I0114 01:19:27.112399 3936 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:19:27.112819 kubelet[3936]: I0114 01:19:27.112409 3936 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:19:27.112819 kubelet[3936]: I0114 01:19:27.112599 3936 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:19:27.114583 kubelet[3936]: E0114 01:19:27.114557 3936 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 01:19:27.149065 kubelet[3936]: I0114 01:19:27.149045 3936 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.149745 kubelet[3936]: I0114 01:19:27.149723 3936 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.150464 kubelet[3936]: I0114 01:19:27.150381 3936 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.157726 kubelet[3936]: W0114 01:19:27.157181 3936 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 01:19:27.165291 kubelet[3936]: W0114 01:19:27.165260 3936 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 01:19:27.165935 kubelet[3936]: W0114 01:19:27.165856 3936 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 01:19:27.215451 kubelet[3936]: I0114 01:19:27.215395 3936 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.229571 kubelet[3936]: I0114 01:19:27.229503 3936 kubelet_node_status.go:124] "Node was previously registered" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.229654 kubelet[3936]: I0114 01:19:27.229558 3936 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.245859 kubelet[3936]: I0114 01:19:27.245711 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/141ff9ad7f0b37e7a156600ce77dbaba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578.0.0-p-dbef80f9ad\" (UID: \"141ff9ad7f0b37e7a156600ce77dbaba\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.245859 kubelet[3936]: I0114 01:19:27.245741 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.245859 kubelet[3936]: I0114 01:19:27.245761 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.245859 kubelet[3936]: I0114 01:19:27.245778 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.245859 kubelet[3936]: I0114 01:19:27.245794 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9737488ab4b98d787f7666b61963445-kubeconfig\") pod \"kube-scheduler-ci-4578.0.0-p-dbef80f9ad\" (UID: \"e9737488ab4b98d787f7666b61963445\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.246037 
kubelet[3936]: I0114 01:19:27.245811 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/141ff9ad7f0b37e7a156600ce77dbaba-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-dbef80f9ad\" (UID: \"141ff9ad7f0b37e7a156600ce77dbaba\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.246037 kubelet[3936]: I0114 01:19:27.245826 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/141ff9ad7f0b37e7a156600ce77dbaba-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-dbef80f9ad\" (UID: \"141ff9ad7f0b37e7a156600ce77dbaba\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.246037 kubelet[3936]: I0114 01:19:27.245842 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:27.246037 kubelet[3936]: I0114 01:19:27.245862 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a10f40b63a2a614fa694f9286a68124-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-dbef80f9ad\" (UID: \"7a10f40b63a2a614fa694f9286a68124\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:28.034480 kubelet[3936]: I0114 01:19:28.034444 3936 apiserver.go:52] "Watching apiserver" Jan 14 01:19:28.045558 kubelet[3936]: I0114 01:19:28.045532 3936 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:19:28.086435 kubelet[3936]: I0114 01:19:28.086327 3936 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:28.094437 kubelet[3936]: W0114 01:19:28.094419 3936 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 01:19:28.094611 kubelet[3936]: E0114 01:19:28.094574 3936 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-dbef80f9ad\" already exists" pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" Jan 14 01:19:28.114562 kubelet[3936]: I0114 01:19:28.114290 3936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4578.0.0-p-dbef80f9ad" podStartSLOduration=1.114277007 podStartE2EDuration="1.114277007s" podCreationTimestamp="2026-01-14 01:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:19:28.105833892 +0000 UTC m=+1.136765087" watchObservedRunningTime="2026-01-14 01:19:28.114277007 +0000 UTC m=+1.145208217" Jan 14 01:19:28.114919 kubelet[3936]: I0114 01:19:28.114804 3936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-dbef80f9ad" podStartSLOduration=1.114791524 podStartE2EDuration="1.114791524s" podCreationTimestamp="2026-01-14 01:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:19:28.114113657 +0000 UTC m=+1.145044855" watchObservedRunningTime="2026-01-14 01:19:28.114791524 +0000 UTC m=+1.145722722" Jan 14 01:19:28.132210 kubelet[3936]: I0114 01:19:28.132171 3936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4578.0.0-p-dbef80f9ad" podStartSLOduration=1.132159524 podStartE2EDuration="1.132159524s" 
podCreationTimestamp="2026-01-14 01:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:19:28.123202158 +0000 UTC m=+1.154133357" watchObservedRunningTime="2026-01-14 01:19:28.132159524 +0000 UTC m=+1.163090721" Jan 14 01:19:30.818508 kubelet[3936]: I0114 01:19:30.818470 3936 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:19:30.818980 kubelet[3936]: I0114 01:19:30.818955 3936 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:19:30.819011 containerd[2417]: time="2026-01-14T01:19:30.818803264Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:19:31.471611 systemd[1]: Created slice kubepods-besteffort-podcde7978e_e191_4d34_835a_c09a2e2d794a.slice - libcontainer container kubepods-besteffort-podcde7978e_e191_4d34_835a_c09a2e2d794a.slice. 
Jan 14 01:19:31.476283 kubelet[3936]: I0114 01:19:31.476252 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cde7978e-e191-4d34-835a-c09a2e2d794a-xtables-lock\") pod \"kube-proxy-w8qhx\" (UID: \"cde7978e-e191-4d34-835a-c09a2e2d794a\") " pod="kube-system/kube-proxy-w8qhx" Jan 14 01:19:31.476283 kubelet[3936]: I0114 01:19:31.476286 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g576s\" (UniqueName: \"kubernetes.io/projected/cde7978e-e191-4d34-835a-c09a2e2d794a-kube-api-access-g576s\") pod \"kube-proxy-w8qhx\" (UID: \"cde7978e-e191-4d34-835a-c09a2e2d794a\") " pod="kube-system/kube-proxy-w8qhx" Jan 14 01:19:31.476496 kubelet[3936]: I0114 01:19:31.476308 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cde7978e-e191-4d34-835a-c09a2e2d794a-kube-proxy\") pod \"kube-proxy-w8qhx\" (UID: \"cde7978e-e191-4d34-835a-c09a2e2d794a\") " pod="kube-system/kube-proxy-w8qhx" Jan 14 01:19:31.476496 kubelet[3936]: I0114 01:19:31.476325 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cde7978e-e191-4d34-835a-c09a2e2d794a-lib-modules\") pod \"kube-proxy-w8qhx\" (UID: \"cde7978e-e191-4d34-835a-c09a2e2d794a\") " pod="kube-system/kube-proxy-w8qhx" Jan 14 01:19:31.779078 containerd[2417]: time="2026-01-14T01:19:31.779024868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8qhx,Uid:cde7978e-e191-4d34-835a-c09a2e2d794a,Namespace:kube-system,Attempt:0,}" Jan 14 01:19:31.837454 containerd[2417]: time="2026-01-14T01:19:31.837071287Z" level=info msg="connecting to shim 0757df41b6d42eacb7b5f91f9d0530c7744b0aa36391cb62a3f28ddc75efeb2c" 
address="unix:///run/containerd/s/4b3c015188255f9196540b3bbb763d9515f1a6972d98ccae7fc9e746686fd97f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:19:31.865792 systemd[1]: Started cri-containerd-0757df41b6d42eacb7b5f91f9d0530c7744b0aa36391cb62a3f28ddc75efeb2c.scope - libcontainer container 0757df41b6d42eacb7b5f91f9d0530c7744b0aa36391cb62a3f28ddc75efeb2c. Jan 14 01:19:31.877251 systemd[1]: Created slice kubepods-besteffort-pod4b2f9d0c_661c_4d82_8e71_29100e2abc13.slice - libcontainer container kubepods-besteffort-pod4b2f9d0c_661c_4d82_8e71_29100e2abc13.slice. Jan 14 01:19:31.880296 kubelet[3936]: I0114 01:19:31.879214 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b2f9d0c-661c-4d82-8e71-29100e2abc13-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5mvsz\" (UID: \"4b2f9d0c-661c-4d82-8e71-29100e2abc13\") " pod="tigera-operator/tigera-operator-7dcd859c48-5mvsz" Jan 14 01:19:31.880989 kubelet[3936]: I0114 01:19:31.880336 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbz7n\" (UniqueName: \"kubernetes.io/projected/4b2f9d0c-661c-4d82-8e71-29100e2abc13-kube-api-access-hbz7n\") pod \"tigera-operator-7dcd859c48-5mvsz\" (UID: \"4b2f9d0c-661c-4d82-8e71-29100e2abc13\") " pod="tigera-operator/tigera-operator-7dcd859c48-5mvsz" Jan 14 01:19:31.895369 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:19:31.895447 kernel: audit: type=1334 audit(1768353571.891:465): prog-id=157 op=LOAD Jan 14 01:19:31.891000 audit: BPF prog-id=157 op=LOAD Jan 14 01:19:31.892000 audit: BPF prog-id=158 op=LOAD Jan 14 01:19:31.903651 kernel: audit: type=1334 audit(1768353571.892:466): prog-id=158 op=LOAD Jan 14 01:19:31.903706 kernel: audit: type=1300 audit(1768353571.892:466): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:31.892000 audit[4001]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.915778 kernel: audit: type=1327 audit(1768353571.892:466): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.892000 audit: BPF prog-id=158 op=UNLOAD Jan 14 01:19:31.920666 kernel: audit: type=1334 audit(1768353571.892:467): prog-id=158 op=UNLOAD Jan 14 01:19:31.920727 kernel: audit: type=1300 audit(1768353571.892:467): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:31.892000 audit[4001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:31.929749 kernel: audit: type=1327 
audit(1768353571.892:467): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.929835 containerd[2417]: time="2026-01-14T01:19:31.927140270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8qhx,Uid:cde7978e-e191-4d34-835a-c09a2e2d794a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0757df41b6d42eacb7b5f91f9d0530c7744b0aa36391cb62a3f28ddc75efeb2c\"" Jan 14 01:19:31.892000 audit: BPF prog-id=159 op=LOAD Jan 14 01:19:31.932065 containerd[2417]: time="2026-01-14T01:19:31.931159435Z" level=info msg="CreateContainer within sandbox \"0757df41b6d42eacb7b5f91f9d0530c7744b0aa36391cb62a3f28ddc75efeb2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:19:31.937574 kernel: audit: type=1334 audit(1768353571.892:468): prog-id=159 op=LOAD Jan 14 01:19:31.937631 kernel: audit: type=1300 audit(1768353571.892:468): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:31.892000 audit[4001]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:19:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.942658 kernel: audit: type=1327 audit(1768353571.892:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.892000 audit: BPF prog-id=160 op=LOAD Jan 14 01:19:31.892000 audit[4001]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.892000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:19:31.892000 audit[4001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:31.892000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.892000 audit: BPF prog-id=159 op=UNLOAD Jan 14 01:19:31.892000 audit[4001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.892000 audit: BPF prog-id=161 op=LOAD Jan 14 01:19:31.892000 audit[4001]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3990 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037353764663431623664343265616362376235663931663964303533 Jan 14 01:19:31.970666 containerd[2417]: time="2026-01-14T01:19:31.969707726Z" level=info msg="Container 62e829d645122f7abf12eb66bd5249a0c101a44e3cd29c1290dce7c59d6a2a43: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:19:31.989657 containerd[2417]: time="2026-01-14T01:19:31.989611281Z" level=info msg="CreateContainer within sandbox 
\"0757df41b6d42eacb7b5f91f9d0530c7744b0aa36391cb62a3f28ddc75efeb2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62e829d645122f7abf12eb66bd5249a0c101a44e3cd29c1290dce7c59d6a2a43\"" Jan 14 01:19:31.990693 containerd[2417]: time="2026-01-14T01:19:31.990042453Z" level=info msg="StartContainer for \"62e829d645122f7abf12eb66bd5249a0c101a44e3cd29c1290dce7c59d6a2a43\"" Jan 14 01:19:31.991324 containerd[2417]: time="2026-01-14T01:19:31.991299798Z" level=info msg="connecting to shim 62e829d645122f7abf12eb66bd5249a0c101a44e3cd29c1290dce7c59d6a2a43" address="unix:///run/containerd/s/4b3c015188255f9196540b3bbb763d9515f1a6972d98ccae7fc9e746686fd97f" protocol=ttrpc version=3 Jan 14 01:19:32.008790 systemd[1]: Started cri-containerd-62e829d645122f7abf12eb66bd5249a0c101a44e3cd29c1290dce7c59d6a2a43.scope - libcontainer container 62e829d645122f7abf12eb66bd5249a0c101a44e3cd29c1290dce7c59d6a2a43. Jan 14 01:19:32.049000 audit: BPF prog-id=162 op=LOAD Jan 14 01:19:32.049000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3990 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632653832396436343531323266376162663132656236366264353234 Jan 14 01:19:32.049000 audit: BPF prog-id=163 op=LOAD Jan 14 01:19:32.049000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3990 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:19:32.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632653832396436343531323266376162663132656236366264353234 Jan 14 01:19:32.049000 audit: BPF prog-id=163 op=UNLOAD Jan 14 01:19:32.049000 audit[4026]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632653832396436343531323266376162663132656236366264353234 Jan 14 01:19:32.049000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:19:32.049000 audit[4026]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3990 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632653832396436343531323266376162663132656236366264353234 Jan 14 01:19:32.049000 audit: BPF prog-id=164 op=LOAD Jan 14 01:19:32.049000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3990 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632653832396436343531323266376162663132656236366264353234 Jan 14 01:19:32.072550 containerd[2417]: time="2026-01-14T01:19:32.072353045Z" level=info msg="StartContainer for \"62e829d645122f7abf12eb66bd5249a0c101a44e3cd29c1290dce7c59d6a2a43\" returns successfully" Jan 14 01:19:32.170000 audit[4091]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=4091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.170000 audit[4091]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe997ab540 a2=0 a3=7ffe997ab52c items=0 ppid=4040 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:19:32.174000 audit[4092]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=4092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.174000 audit[4092]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffced0220a0 a2=0 a3=7ffced02208c items=0 ppid=4040 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:19:32.174000 audit[4094]: NETFILTER_CFG table=nat:59 family=2 
entries=1 op=nft_register_chain pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.174000 audit[4094]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe07697270 a2=0 a3=7ffe0769725c items=0 ppid=4040 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:19:32.176000 audit[4095]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=4095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.176000 audit[4095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe6647500 a2=0 a3=7fffe66474ec items=0 ppid=4040 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.176000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:19:32.177000 audit[4096]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=4096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.177000 audit[4096]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc761279b0 a2=0 a3=7ffc7612799c items=0 ppid=4040 pid=4096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:19:32.178000 audit[4097]: 
NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=4097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.178000 audit[4097]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea55d78a0 a2=0 a3=7ffea55d788c items=0 ppid=4040 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.178000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:19:32.184538 containerd[2417]: time="2026-01-14T01:19:32.184419154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5mvsz,Uid:4b2f9d0c-661c-4d82-8e71-29100e2abc13,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:19:32.232971 containerd[2417]: time="2026-01-14T01:19:32.232937658Z" level=info msg="connecting to shim ce86c30f420ce7730c32b02dc31b6e36e0e7775fd7ff57b9c8e7d62c212fed84" address="unix:///run/containerd/s/5f1b4c7056a28f8dc6606e251641ba04435c3b86acd4007a52c798f8047e1639" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:19:32.253800 systemd[1]: Started cri-containerd-ce86c30f420ce7730c32b02dc31b6e36e0e7775fd7ff57b9c8e7d62c212fed84.scope - libcontainer container ce86c30f420ce7730c32b02dc31b6e36e0e7775fd7ff57b9c8e7d62c212fed84. 
Jan 14 01:19:32.261000 audit: BPF prog-id=165 op=LOAD Jan 14 01:19:32.261000 audit: BPF prog-id=166 op=LOAD Jan 14 01:19:32.261000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4107 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383663333066343230636537373330633332623032646333316236 Jan 14 01:19:32.261000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:19:32.261000 audit[4118]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4107 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383663333066343230636537373330633332623032646333316236 Jan 14 01:19:32.261000 audit: BPF prog-id=167 op=LOAD Jan 14 01:19:32.261000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4107 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.261000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383663333066343230636537373330633332623032646333316236 Jan 14 01:19:32.261000 audit: BPF prog-id=168 op=LOAD Jan 14 01:19:32.261000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4107 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383663333066343230636537373330633332623032646333316236 Jan 14 01:19:32.262000 audit: BPF prog-id=168 op=UNLOAD Jan 14 01:19:32.262000 audit[4118]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4107 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383663333066343230636537373330633332623032646333316236 Jan 14 01:19:32.262000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:19:32.262000 audit[4118]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4107 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:19:32.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383663333066343230636537373330633332623032646333316236 Jan 14 01:19:32.262000 audit: BPF prog-id=169 op=LOAD Jan 14 01:19:32.262000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4107 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383663333066343230636537373330633332623032646333316236 Jan 14 01:19:32.273000 audit[4137]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.273000 audit[4137]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd3df9bdf0 a2=0 a3=7ffd3df9bddc items=0 ppid=4040 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:19:32.276000 audit[4139]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.276000 audit[4139]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff42c09f10 a2=0 a3=7fff42c09efc items=0 ppid=4040 pid=4139 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:19:32.280000 audit[4142]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.280000 audit[4142]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdb70a6380 a2=0 a3=7ffdb70a636c items=0 ppid=4040 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:19:32.281000 audit[4143]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.281000 audit[4143]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe016b3f60 a2=0 a3=7ffe016b3f4c items=0 ppid=4040 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 
Jan 14 01:19:32.287000 audit[4145]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.287000 audit[4145]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdbc97f4c0 a2=0 a3=7ffdbc97f4ac items=0 ppid=4040 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:19:32.289000 audit[4147]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.289000 audit[4147]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc199888a0 a2=0 a3=7ffc1998888c items=0 ppid=4040 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.289000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:19:32.292000 audit[4155]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.292000 audit[4155]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdade38160 a2=0 a3=7ffdade3814c items=0 ppid=4040 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 01:19:32.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:19:32.297000 audit[4158]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.297000 audit[4158]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff0bf7b450 a2=0 a3=7fff0bf7b43c items=0 ppid=4040 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:19:32.298000 audit[4159]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.298000 audit[4159]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe68a12010 a2=0 a3=7ffe68a11ffc items=0 ppid=4040 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.298000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:19:32.301000 audit[4161]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.301000 
audit[4161]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe63f61b70 a2=0 a3=7ffe63f61b5c items=0 ppid=4040 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:19:32.303295 containerd[2417]: time="2026-01-14T01:19:32.303144999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5mvsz,Uid:4b2f9d0c-661c-4d82-8e71-29100e2abc13,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ce86c30f420ce7730c32b02dc31b6e36e0e7775fd7ff57b9c8e7d62c212fed84\"" Jan 14 01:19:32.302000 audit[4162]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.302000 audit[4162]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefc3457d0 a2=0 a3=7ffefc3457bc items=0 ppid=4040 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.302000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:19:32.307180 containerd[2417]: time="2026-01-14T01:19:32.307077934Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:19:32.306000 audit[4164]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.306000 audit[4164]: SYSCALL arch=c000003e syscall=46 
success=yes exit=748 a0=3 a1=7ffcbf8e4e70 a2=0 a3=7ffcbf8e4e5c items=0 ppid=4040 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:19:32.309000 audit[4167]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.309000 audit[4167]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc73b38ac0 a2=0 a3=7ffc73b38aac items=0 ppid=4040 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:19:32.313000 audit[4170]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.313000 audit[4170]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdea15ca90 a2=0 a3=7ffdea15ca7c items=0 ppid=4040 pid=4170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.313000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:19:32.314000 audit[4171]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.314000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc83dd20d0 a2=0 a3=7ffc83dd20bc items=0 ppid=4040 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:19:32.316000 audit[4173]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.316000 audit[4173]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff5236ffb0 a2=0 a3=7fff5236ff9c items=0 ppid=4040 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:19:32.319000 audit[4176]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.319000 audit[4176]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdb6b85b90 a2=0 a3=7ffdb6b85b7c 
items=0 ppid=4040 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.319000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:19:32.320000 audit[4177]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.320000 audit[4177]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddaa5d2f0 a2=0 a3=7ffddaa5d2dc items=0 ppid=4040 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:19:32.322000 audit[4179]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:19:32.322000 audit[4179]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffce6a1ad80 a2=0 a3=7ffce6a1ad6c items=0 ppid=4040 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.322000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:19:32.462000 audit[4185]: 
NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:32.462000 audit[4185]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe795d53f0 a2=0 a3=7ffe795d53dc items=0 ppid=4040 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:32.472000 audit[4185]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:32.472000 audit[4185]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe795d53f0 a2=0 a3=7ffe795d53dc items=0 ppid=4040 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.472000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:32.473000 audit[4190]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.473000 audit[4190]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffccbdc0ba0 a2=0 a3=7ffccbdc0b8c items=0 ppid=4040 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.473000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:19:32.476000 audit[4192]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.476000 audit[4192]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcc5fe1100 a2=0 a3=7ffcc5fe10ec items=0 ppid=4040 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:19:32.480000 audit[4195]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.480000 audit[4195]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc669bac20 a2=0 a3=7ffc669bac0c items=0 ppid=4040 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:19:32.481000 audit[4196]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.481000 audit[4196]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff388f2a30 a2=0 a3=7fff388f2a1c items=0 ppid=4040 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:19:32.483000 audit[4198]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.483000 audit[4198]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeeac709a0 a2=0 a3=7ffeeac7098c items=0 ppid=4040 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.483000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:19:32.484000 audit[4199]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.484000 audit[4199]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1c8f4320 a2=0 a3=7ffd1c8f430c items=0 ppid=4040 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:19:32.486000 audit[4201]: 
NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.486000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffebebab580 a2=0 a3=7ffebebab56c items=0 ppid=4040 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.486000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:19:32.489000 audit[4204]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.489000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc49c7cce0 a2=0 a3=7ffc49c7cccc items=0 ppid=4040 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:19:32.490000 audit[4205]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.490000 audit[4205]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5c7b50d0 a2=0 a3=7ffd5c7b50bc items=0 ppid=4040 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:19:32.493000 audit[4207]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.493000 audit[4207]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd84feeca0 a2=0 a3=7ffd84feec8c items=0 ppid=4040 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.493000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:19:32.494000 audit[4208]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.494000 audit[4208]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde831e2f0 a2=0 a3=7ffde831e2dc items=0 ppid=4040 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:19:32.496000 audit[4210]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.496000 audit[4210]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd8a9dd80 a2=0 a3=7ffdd8a9dd6c items=0 ppid=4040 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:19:32.499000 audit[4213]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.499000 audit[4213]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff82e343a0 a2=0 a3=7fff82e3438c items=0 ppid=4040 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.499000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:19:32.502000 audit[4216]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.502000 audit[4216]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc4ed06f0 a2=0 a3=7ffdc4ed06dc items=0 ppid=4040 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.502000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:19:32.503000 audit[4217]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.503000 audit[4217]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa0a8f520 a2=0 a3=7fffa0a8f50c items=0 ppid=4040 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.503000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:19:32.505000 audit[4219]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.505000 audit[4219]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe42c937d0 a2=0 a3=7ffe42c937bc items=0 ppid=4040 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:19:32.509000 audit[4222]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.509000 audit[4222]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef68f2c40 
a2=0 a3=7ffef68f2c2c items=0 ppid=4040 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:19:32.510000 audit[4223]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.510000 audit[4223]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff784843f0 a2=0 a3=7fff784843dc items=0 ppid=4040 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.510000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:19:32.512000 audit[4225]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.512000 audit[4225]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff8726ddf0 a2=0 a3=7fff8726dddc items=0 ppid=4040 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.512000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 
01:19:32.513000 audit[4226]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.513000 audit[4226]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe14f91b0 a2=0 a3=7fffe14f919c items=0 ppid=4040 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:19:32.515000 audit[4228]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.515000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc6edf8b70 a2=0 a3=7ffc6edf8b5c items=0 ppid=4040 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.515000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:19:32.518000 audit[4231]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:19:32.518000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff59d288f0 a2=0 a3=7fff59d288dc items=0 ppid=4040 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.518000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:19:32.521000 audit[4233]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:19:32.521000 audit[4233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffcc84afd50 a2=0 a3=7ffcc84afd3c items=0 ppid=4040 pid=4233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.521000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:32.521000 audit[4233]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:19:32.521000 audit[4233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcc84afd50 a2=0 a3=7ffcc84afd3c items=0 ppid=4040 pid=4233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:32.521000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:34.254299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496470923.mount: Deactivated successfully. 
Jan 14 01:19:34.400237 kubelet[3936]: I0114 01:19:34.399500 3936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w8qhx" podStartSLOduration=3.39948195 podStartE2EDuration="3.39948195s" podCreationTimestamp="2026-01-14 01:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:19:32.105313775 +0000 UTC m=+5.136244975" watchObservedRunningTime="2026-01-14 01:19:34.39948195 +0000 UTC m=+7.430413190" Jan 14 01:19:34.794326 containerd[2417]: time="2026-01-14T01:19:34.794283672Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:34.797287 containerd[2417]: time="2026-01-14T01:19:34.797250589Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 01:19:34.800353 containerd[2417]: time="2026-01-14T01:19:34.800312648Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:34.804183 containerd[2417]: time="2026-01-14T01:19:34.804138719Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:34.804777 containerd[2417]: time="2026-01-14T01:19:34.804755244Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.497418603s" Jan 14 01:19:34.804822 containerd[2417]: time="2026-01-14T01:19:34.804778263Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:19:34.806764 containerd[2417]: time="2026-01-14T01:19:34.806729163Z" level=info msg="CreateContainer within sandbox \"ce86c30f420ce7730c32b02dc31b6e36e0e7775fd7ff57b9c8e7d62c212fed84\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:19:34.840156 containerd[2417]: time="2026-01-14T01:19:34.840131329Z" level=info msg="Container 63edf3a31b9a63199c3c1eccb09108c92da31bbb3e919032d053d6437c2035c9: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:19:34.856647 containerd[2417]: time="2026-01-14T01:19:34.856608726Z" level=info msg="CreateContainer within sandbox \"ce86c30f420ce7730c32b02dc31b6e36e0e7775fd7ff57b9c8e7d62c212fed84\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"63edf3a31b9a63199c3c1eccb09108c92da31bbb3e919032d053d6437c2035c9\"" Jan 14 01:19:34.857127 containerd[2417]: time="2026-01-14T01:19:34.857103922Z" level=info msg="StartContainer for \"63edf3a31b9a63199c3c1eccb09108c92da31bbb3e919032d053d6437c2035c9\"" Jan 14 01:19:34.858112 containerd[2417]: time="2026-01-14T01:19:34.858054967Z" level=info msg="connecting to shim 63edf3a31b9a63199c3c1eccb09108c92da31bbb3e919032d053d6437c2035c9" address="unix:///run/containerd/s/5f1b4c7056a28f8dc6606e251641ba04435c3b86acd4007a52c798f8047e1639" protocol=ttrpc version=3 Jan 14 01:19:34.878836 systemd[1]: Started cri-containerd-63edf3a31b9a63199c3c1eccb09108c92da31bbb3e919032d053d6437c2035c9.scope - libcontainer container 63edf3a31b9a63199c3c1eccb09108c92da31bbb3e919032d053d6437c2035c9. 
Jan 14 01:19:34.885000 audit: BPF prog-id=170 op=LOAD Jan 14 01:19:34.886000 audit: BPF prog-id=171 op=LOAD Jan 14 01:19:34.886000 audit[4242]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4107 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:34.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633656466336133316239613633313939633363316563636230393130 Jan 14 01:19:34.886000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:19:34.886000 audit[4242]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4107 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:34.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633656466336133316239613633313939633363316563636230393130 Jan 14 01:19:34.886000 audit: BPF prog-id=172 op=LOAD Jan 14 01:19:34.886000 audit[4242]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4107 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:34.886000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633656466336133316239613633313939633363316563636230393130 Jan 14 01:19:34.886000 audit: BPF prog-id=173 op=LOAD Jan 14 01:19:34.886000 audit[4242]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4107 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:34.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633656466336133316239613633313939633363316563636230393130 Jan 14 01:19:34.886000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:19:34.886000 audit[4242]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4107 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:34.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633656466336133316239613633313939633363316563636230393130 Jan 14 01:19:34.886000 audit: BPF prog-id=172 op=UNLOAD Jan 14 01:19:34.886000 audit[4242]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4107 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:19:34.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633656466336133316239613633313939633363316563636230393130 Jan 14 01:19:34.886000 audit: BPF prog-id=174 op=LOAD Jan 14 01:19:34.886000 audit[4242]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4107 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:34.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633656466336133316239613633313939633363316563636230393130 Jan 14 01:19:34.907342 containerd[2417]: time="2026-01-14T01:19:34.907311715Z" level=info msg="StartContainer for \"63edf3a31b9a63199c3c1eccb09108c92da31bbb3e919032d053d6437c2035c9\" returns successfully" Jan 14 01:19:36.234176 kubelet[3936]: I0114 01:19:36.233995 3936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5mvsz" podStartSLOduration=2.734278652 podStartE2EDuration="5.233976854s" podCreationTimestamp="2026-01-14 01:19:31 +0000 UTC" firstStartedPulling="2026-01-14 01:19:32.305782385 +0000 UTC m=+5.336713580" lastFinishedPulling="2026-01-14 01:19:34.805480589 +0000 UTC m=+7.836411782" observedRunningTime="2026-01-14 01:19:35.125600325 +0000 UTC m=+8.156531524" watchObservedRunningTime="2026-01-14 01:19:36.233976854 +0000 UTC m=+9.264908096" Jan 14 01:19:40.674240 sudo[2880]: pam_unix(sudo:session): session closed for user root Jan 14 01:19:40.684335 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 01:19:40.684421 
kernel: audit: type=1106 audit(1768353580.674:545): pid=2880 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:19:40.674000 audit[2880]: USER_END pid=2880 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:19:40.674000 audit[2880]: CRED_DISP pid=2880 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:19:40.692657 kernel: audit: type=1104 audit(1768353580.674:546): pid=2880 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:19:40.784040 sshd[2879]: Connection closed by 10.200.16.10 port 57342 Jan 14 01:19:40.784788 sshd-session[2875]: pam_unix(sshd:session): session closed for user core Jan 14 01:19:40.785000 audit[2875]: USER_END pid=2875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:19:40.789917 systemd[1]: sshd@6-10.200.4.14:22-10.200.16.10:57342.service: Deactivated successfully. Jan 14 01:19:40.793002 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:19:40.793509 systemd[1]: session-10.scope: Consumed 3.332s CPU time, 228M memory peak. Jan 14 01:19:40.795655 systemd-logind[2405]: Session 10 logged out. Waiting for processes to exit. 
Jan 14 01:19:40.797764 kernel: audit: type=1106 audit(1768353580.785:547): pid=2875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:19:40.799133 systemd-logind[2405]: Removed session 10. Jan 14 01:19:40.785000 audit[2875]: CRED_DISP pid=2875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:19:40.808693 kernel: audit: type=1104 audit(1768353580.785:548): pid=2875 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:19:40.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.14:22-10.200.16.10:57342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:40.817683 kernel: audit: type=1131 audit(1768353580.788:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.14:22-10.200.16.10:57342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:41.617000 audit[4323]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:41.623718 kernel: audit: type=1325 audit(1768353581.617:550): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:41.617000 audit[4323]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff267cd0a0 a2=0 a3=7fff267cd08c items=0 ppid=4040 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:41.635709 kernel: audit: type=1300 audit(1768353581.617:550): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff267cd0a0 a2=0 a3=7fff267cd08c items=0 ppid=4040 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:41.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:41.645671 kernel: audit: type=1327 audit(1768353581.617:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:41.649667 kernel: audit: type=1325 audit(1768353581.635:551): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:41.635000 audit[4323]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:41.635000 audit[4323]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff267cd0a0 a2=0 a3=0 items=0 ppid=4040 
pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:41.656655 kernel: audit: type=1300 audit(1768353581.635:551): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff267cd0a0 a2=0 a3=0 items=0 ppid=4040 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:41.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:41.687000 audit[4325]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:41.687000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffec6e7e220 a2=0 a3=7ffec6e7e20c items=0 ppid=4040 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:41.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:41.692000 audit[4325]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:41.692000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec6e7e220 a2=0 a3=0 items=0 ppid=4040 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:41.692000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:44.872000 audit[4327]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:44.872000 audit[4327]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdd1a2a4f0 a2=0 a3=7ffdd1a2a4dc items=0 ppid=4040 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:44.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:44.877000 audit[4327]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:44.877000 audit[4327]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd1a2a4f0 a2=0 a3=0 items=0 ppid=4040 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:44.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:44.890000 audit[4329]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:44.890000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe2e86a2a0 a2=0 a3=7ffe2e86a28c items=0 ppid=4040 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:44.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:44.896000 audit[4329]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:44.896000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe2e86a2a0 a2=0 a3=0 items=0 ppid=4040 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:44.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:45.912606 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 14 01:19:45.912738 kernel: audit: type=1325 audit(1768353585.907:558): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:45.907000 audit[4331]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:45.907000 audit[4331]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdfec75a30 a2=0 a3=7ffdfec75a1c items=0 ppid=4040 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:45.919998 kernel: audit: type=1300 audit(1768353585.907:558): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdfec75a30 a2=0 a3=7ffdfec75a1c items=0 ppid=4040 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:45.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:45.923657 kernel: audit: type=1327 audit(1768353585.907:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:45.920000 audit[4331]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:45.935928 kernel: audit: type=1325 audit(1768353585.920:559): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:45.935995 kernel: audit: type=1300 audit(1768353585.920:559): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdfec75a30 a2=0 a3=0 items=0 ppid=4040 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:45.920000 audit[4331]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdfec75a30 a2=0 a3=0 items=0 ppid=4040 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:45.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:45.941660 kernel: audit: type=1327 audit(1768353585.920:559): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:46.460742 systemd[1]: Created slice 
kubepods-besteffort-pod3531c67a_b52a_4f59_80f2_d96b5576cf4a.slice - libcontainer container kubepods-besteffort-pod3531c67a_b52a_4f59_80f2_d96b5576cf4a.slice. Jan 14 01:19:46.481757 kubelet[3936]: I0114 01:19:46.481618 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sxn4\" (UniqueName: \"kubernetes.io/projected/3531c67a-b52a-4f59-80f2-d96b5576cf4a-kube-api-access-8sxn4\") pod \"calico-typha-6d49d64cdb-9bpqc\" (UID: \"3531c67a-b52a-4f59-80f2-d96b5576cf4a\") " pod="calico-system/calico-typha-6d49d64cdb-9bpqc" Jan 14 01:19:46.481757 kubelet[3936]: I0114 01:19:46.481680 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3531c67a-b52a-4f59-80f2-d96b5576cf4a-tigera-ca-bundle\") pod \"calico-typha-6d49d64cdb-9bpqc\" (UID: \"3531c67a-b52a-4f59-80f2-d96b5576cf4a\") " pod="calico-system/calico-typha-6d49d64cdb-9bpqc" Jan 14 01:19:46.481757 kubelet[3936]: I0114 01:19:46.481700 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3531c67a-b52a-4f59-80f2-d96b5576cf4a-typha-certs\") pod \"calico-typha-6d49d64cdb-9bpqc\" (UID: \"3531c67a-b52a-4f59-80f2-d96b5576cf4a\") " pod="calico-system/calico-typha-6d49d64cdb-9bpqc" Jan 14 01:19:46.645096 systemd[1]: Created slice kubepods-besteffort-podcb50369f_38c1_4815_b8c4_f3fd18a9a5db.slice - libcontainer container kubepods-besteffort-podcb50369f_38c1_4815_b8c4_f3fd18a9a5db.slice. 
Jan 14 01:19:46.682655 kubelet[3936]: I0114 01:19:46.682434 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-lib-modules\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682655 kubelet[3936]: I0114 01:19:46.682466 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchqm\" (UniqueName: \"kubernetes.io/projected/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-kube-api-access-xchqm\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682655 kubelet[3936]: I0114 01:19:46.682488 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-cni-bin-dir\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682655 kubelet[3936]: I0114 01:19:46.682502 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-flexvol-driver-host\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682655 kubelet[3936]: I0114 01:19:46.682515 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-node-certs\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682831 kubelet[3936]: I0114 01:19:46.682529 
3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-policysync\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682831 kubelet[3936]: I0114 01:19:46.682546 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-var-run-calico\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682831 kubelet[3936]: I0114 01:19:46.682575 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-cni-net-dir\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682831 kubelet[3936]: I0114 01:19:46.682604 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-var-lib-calico\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682831 kubelet[3936]: I0114 01:19:46.682667 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-tigera-ca-bundle\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682912 kubelet[3936]: I0114 01:19:46.682683 3936 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-xtables-lock\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.682912 kubelet[3936]: I0114 01:19:46.682699 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cb50369f-38c1-4815-b8c4-f3fd18a9a5db-cni-log-dir\") pod \"calico-node-n7m55\" (UID: \"cb50369f-38c1-4815-b8c4-f3fd18a9a5db\") " pod="calico-system/calico-node-n7m55" Jan 14 01:19:46.767327 containerd[2417]: time="2026-01-14T01:19:46.767279470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d49d64cdb-9bpqc,Uid:3531c67a-b52a-4f59-80f2-d96b5576cf4a,Namespace:calico-system,Attempt:0,}" Jan 14 01:19:46.796551 kubelet[3936]: E0114 01:19:46.796269 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.796551 kubelet[3936]: W0114 01:19:46.796289 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.796551 kubelet[3936]: E0114 01:19:46.796329 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.801155 kubelet[3936]: E0114 01:19:46.801136 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.801436 kubelet[3936]: W0114 01:19:46.801365 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.801613 kubelet[3936]: E0114 01:19:46.801602 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.810479 kubelet[3936]: E0114 01:19:46.810460 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.810479 kubelet[3936]: W0114 01:19:46.810478 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.810599 kubelet[3936]: E0114 01:19:46.810495 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.838086 containerd[2417]: time="2026-01-14T01:19:46.838020835Z" level=info msg="connecting to shim e08d04fabd7827334eb2422a14cded6922b4317890cb6abeadd5ca32afe9ae8b" address="unix:///run/containerd/s/09bf0a690cdc88bf13d1850a95ccb27f01519a9fe92ddb5f3aa42a11271aa200" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:19:46.842730 kubelet[3936]: E0114 01:19:46.842609 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:19:46.866505 kubelet[3936]: E0114 01:19:46.865172 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.868288 kubelet[3936]: W0114 01:19:46.868182 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.868288 kubelet[3936]: E0114 01:19:46.868226 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.868590 kubelet[3936]: E0114 01:19:46.868560 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.868590 kubelet[3936]: W0114 01:19:46.868572 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.868804 kubelet[3936]: E0114 01:19:46.868658 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.869153 kubelet[3936]: E0114 01:19:46.869096 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.869153 kubelet[3936]: W0114 01:19:46.869120 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.869153 kubelet[3936]: E0114 01:19:46.869133 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.869571 kubelet[3936]: E0114 01:19:46.869506 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.869571 kubelet[3936]: W0114 01:19:46.869517 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.869571 kubelet[3936]: E0114 01:19:46.869527 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.869815 systemd[1]: Started cri-containerd-e08d04fabd7827334eb2422a14cded6922b4317890cb6abeadd5ca32afe9ae8b.scope - libcontainer container e08d04fabd7827334eb2422a14cded6922b4317890cb6abeadd5ca32afe9ae8b. Jan 14 01:19:46.870820 kubelet[3936]: E0114 01:19:46.870808 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.871249 kubelet[3936]: W0114 01:19:46.871024 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.871249 kubelet[3936]: E0114 01:19:46.871045 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.872006 kubelet[3936]: E0114 01:19:46.871908 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.872006 kubelet[3936]: W0114 01:19:46.871921 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.872006 kubelet[3936]: E0114 01:19:46.871934 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.872257 kubelet[3936]: E0114 01:19:46.872248 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.872325 kubelet[3936]: W0114 01:19:46.872317 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.872369 kubelet[3936]: E0114 01:19:46.872362 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.872582 kubelet[3936]: E0114 01:19:46.872574 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.872718 kubelet[3936]: W0114 01:19:46.872627 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.872718 kubelet[3936]: E0114 01:19:46.872649 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.872868 kubelet[3936]: E0114 01:19:46.872831 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.872868 kubelet[3936]: W0114 01:19:46.872838 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.872921 kubelet[3936]: E0114 01:19:46.872847 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.873242 kubelet[3936]: E0114 01:19:46.873158 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.873242 kubelet[3936]: W0114 01:19:46.873167 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.873242 kubelet[3936]: E0114 01:19:46.873176 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.873407 kubelet[3936]: E0114 01:19:46.873397 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.873407 kubelet[3936]: W0114 01:19:46.873408 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.874253 kubelet[3936]: E0114 01:19:46.874232 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.875373 kubelet[3936]: E0114 01:19:46.875359 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.875373 kubelet[3936]: W0114 01:19:46.875373 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.875564 kubelet[3936]: E0114 01:19:46.875386 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.875592 kubelet[3936]: E0114 01:19:46.875569 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.875592 kubelet[3936]: W0114 01:19:46.875576 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.875592 kubelet[3936]: E0114 01:19:46.875585 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.875822 kubelet[3936]: E0114 01:19:46.875741 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.875822 kubelet[3936]: W0114 01:19:46.875747 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.875822 kubelet[3936]: E0114 01:19:46.875755 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.876048 kubelet[3936]: E0114 01:19:46.876037 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.876091 kubelet[3936]: W0114 01:19:46.876049 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.876091 kubelet[3936]: E0114 01:19:46.876059 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.876192 kubelet[3936]: E0114 01:19:46.876183 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.876192 kubelet[3936]: W0114 01:19:46.876192 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.876775 kubelet[3936]: E0114 01:19:46.876199 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.876775 kubelet[3936]: E0114 01:19:46.876343 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.876775 kubelet[3936]: W0114 01:19:46.876348 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.876775 kubelet[3936]: E0114 01:19:46.876356 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.876775 kubelet[3936]: E0114 01:19:46.876456 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.876775 kubelet[3936]: W0114 01:19:46.876471 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.876775 kubelet[3936]: E0114 01:19:46.876478 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.876775 kubelet[3936]: E0114 01:19:46.876573 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.876775 kubelet[3936]: W0114 01:19:46.876577 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.876775 kubelet[3936]: E0114 01:19:46.876584 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.876935 kubelet[3936]: E0114 01:19:46.876680 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.876935 kubelet[3936]: W0114 01:19:46.876685 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.876935 kubelet[3936]: E0114 01:19:46.876691 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.884357 kubelet[3936]: E0114 01:19:46.884344 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.884503 kubelet[3936]: W0114 01:19:46.884411 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.884503 kubelet[3936]: E0114 01:19:46.884426 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.884691 kubelet[3936]: I0114 01:19:46.884591 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0ad549b6-0df1-4bac-8f3a-1bc2943edac4-varrun\") pod \"csi-node-driver-bg7tj\" (UID: \"0ad549b6-0df1-4bac-8f3a-1bc2943edac4\") " pod="calico-system/csi-node-driver-bg7tj" Jan 14 01:19:46.884955 kubelet[3936]: E0114 01:19:46.884929 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.884955 kubelet[3936]: W0114 01:19:46.884941 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.885133 kubelet[3936]: E0114 01:19:46.885074 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.885133 kubelet[3936]: I0114 01:19:46.885097 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ad549b6-0df1-4bac-8f3a-1bc2943edac4-kubelet-dir\") pod \"csi-node-driver-bg7tj\" (UID: \"0ad549b6-0df1-4bac-8f3a-1bc2943edac4\") " pod="calico-system/csi-node-driver-bg7tj" Jan 14 01:19:46.885342 kubelet[3936]: E0114 01:19:46.885335 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.885421 kubelet[3936]: W0114 01:19:46.885413 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.885511 kubelet[3936]: E0114 01:19:46.885472 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.885751 kubelet[3936]: E0114 01:19:46.885743 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.885821 kubelet[3936]: W0114 01:19:46.885789 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.885000 audit: BPF prog-id=175 op=LOAD Jan 14 01:19:46.885993 kubelet[3936]: E0114 01:19:46.885881 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.886891 kubelet[3936]: E0114 01:19:46.886812 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.886975 kubelet[3936]: W0114 01:19:46.886946 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.887185 kubelet[3936]: E0114 01:19:46.887108 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.887439 kubelet[3936]: E0114 01:19:46.887417 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.887439 kubelet[3936]: W0114 01:19:46.887427 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.887651 kernel: audit: type=1334 audit(1768353586.885:560): prog-id=175 op=LOAD Jan 14 01:19:46.887753 kubelet[3936]: E0114 01:19:46.887723 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.887992 kubelet[3936]: E0114 01:19:46.887976 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.888078 kubelet[3936]: W0114 01:19:46.888051 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.888078 kubelet[3936]: E0114 01:19:46.888066 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.888213 kubelet[3936]: I0114 01:19:46.888134 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0ad549b6-0df1-4bac-8f3a-1bc2943edac4-registration-dir\") pod \"csi-node-driver-bg7tj\" (UID: \"0ad549b6-0df1-4bac-8f3a-1bc2943edac4\") " pod="calico-system/csi-node-driver-bg7tj" Jan 14 01:19:46.888435 kubelet[3936]: E0114 01:19:46.888411 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.888435 kubelet[3936]: W0114 01:19:46.888423 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.888570 kubelet[3936]: E0114 01:19:46.888497 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.888570 kubelet[3936]: I0114 01:19:46.888515 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvs2\" (UniqueName: \"kubernetes.io/projected/0ad549b6-0df1-4bac-8f3a-1bc2943edac4-kube-api-access-zxvs2\") pod \"csi-node-driver-bg7tj\" (UID: \"0ad549b6-0df1-4bac-8f3a-1bc2943edac4\") " pod="calico-system/csi-node-driver-bg7tj" Jan 14 01:19:46.887000 audit: BPF prog-id=176 op=LOAD Jan 14 01:19:46.889661 kubelet[3936]: E0114 01:19:46.889613 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.889819 kubelet[3936]: W0114 01:19:46.889631 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.889819 kubelet[3936]: E0114 01:19:46.889754 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.889819 kubelet[3936]: I0114 01:19:46.889773 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0ad549b6-0df1-4bac-8f3a-1bc2943edac4-socket-dir\") pod \"csi-node-driver-bg7tj\" (UID: \"0ad549b6-0df1-4bac-8f3a-1bc2943edac4\") " pod="calico-system/csi-node-driver-bg7tj" Jan 14 01:19:46.890083 kubelet[3936]: E0114 01:19:46.890061 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.890083 kubelet[3936]: W0114 01:19:46.890071 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.890284 kubelet[3936]: E0114 01:19:46.890266 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.890646 kernel: audit: type=1334 audit(1768353586.887:561): prog-id=176 op=LOAD Jan 14 01:19:46.890707 kubelet[3936]: E0114 01:19:46.890620 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.890842 kubelet[3936]: W0114 01:19:46.890778 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.890964 kubelet[3936]: E0114 01:19:46.890948 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.891244 kubelet[3936]: E0114 01:19:46.891234 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.891325 kubelet[3936]: W0114 01:19:46.891315 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.887000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4349 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.892445 kubelet[3936]: E0114 01:19:46.892372 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.892910 kubelet[3936]: E0114 01:19:46.892839 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.893310 kubelet[3936]: W0114 01:19:46.893164 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.893310 kubelet[3936]: E0114 01:19:46.893234 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.894624 kubelet[3936]: E0114 01:19:46.894455 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.894624 kubelet[3936]: W0114 01:19:46.894550 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.894624 kubelet[3936]: E0114 01:19:46.894613 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.895498 kubelet[3936]: E0114 01:19:46.895402 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.895498 kubelet[3936]: W0114 01:19:46.895423 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.895498 kubelet[3936]: E0114 01:19:46.895435 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.897934 kernel: audit: type=1300 audit(1768353586.887:561): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4349 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.898104 kernel: audit: type=1327 audit(1768353586.887:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386430346661626437383237333334656232343232613134636465 Jan 14 01:19:46.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386430346661626437383237333334656232343232613134636465 Jan 14 01:19:46.887000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:19:46.887000 audit[4368]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4349 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386430346661626437383237333334656232343232613134636465 Jan 14 01:19:46.887000 audit: BPF prog-id=177 op=LOAD Jan 14 01:19:46.887000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4349 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386430346661626437383237333334656232343232613134636465 Jan 14 01:19:46.887000 audit: BPF prog-id=178 op=LOAD Jan 14 01:19:46.887000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4349 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386430346661626437383237333334656232343232613134636465 Jan 14 01:19:46.887000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:19:46.887000 audit[4368]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4349 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386430346661626437383237333334656232343232613134636465 Jan 14 01:19:46.887000 audit: BPF prog-id=177 op=UNLOAD Jan 14 01:19:46.887000 audit[4368]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4349 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386430346661626437383237333334656232343232613134636465 Jan 14 01:19:46.887000 audit: BPF prog-id=179 op=LOAD Jan 14 01:19:46.887000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4349 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386430346661626437383237333334656232343232613134636465 Jan 14 01:19:46.948168 containerd[2417]: time="2026-01-14T01:19:46.948117070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d49d64cdb-9bpqc,Uid:3531c67a-b52a-4f59-80f2-d96b5576cf4a,Namespace:calico-system,Attempt:0,} returns sandbox id \"e08d04fabd7827334eb2422a14cded6922b4317890cb6abeadd5ca32afe9ae8b\"" Jan 14 01:19:46.948758 containerd[2417]: time="2026-01-14T01:19:46.948733548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7m55,Uid:cb50369f-38c1-4815-b8c4-f3fd18a9a5db,Namespace:calico-system,Attempt:0,}" Jan 14 01:19:46.950238 containerd[2417]: time="2026-01-14T01:19:46.950216985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:19:46.952000 audit[4436]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4436 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 01:19:46.952000 audit[4436]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeb1b5b5c0 a2=0 a3=7ffeb1b5b5ac items=0 ppid=4040 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:46.956000 audit[4436]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:46.956000 audit[4436]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb1b5b5c0 a2=0 a3=0 items=0 ppid=4040 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:46.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:46.990075 kubelet[3936]: E0114 01:19:46.990056 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.990075 kubelet[3936]: W0114 01:19:46.990070 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.990222 kubelet[3936]: E0114 01:19:46.990089 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.990347 kubelet[3936]: E0114 01:19:46.990332 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.990347 kubelet[3936]: W0114 01:19:46.990344 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.990420 kubelet[3936]: E0114 01:19:46.990364 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.990537 kubelet[3936]: E0114 01:19:46.990529 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.990581 kubelet[3936]: W0114 01:19:46.990569 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.990624 kubelet[3936]: E0114 01:19:46.990586 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.990838 kubelet[3936]: E0114 01:19:46.990825 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.990838 kubelet[3936]: W0114 01:19:46.990837 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.990907 kubelet[3936]: E0114 01:19:46.990855 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.991000 kubelet[3936]: E0114 01:19:46.990988 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.991000 kubelet[3936]: W0114 01:19:46.990997 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.991061 kubelet[3936]: E0114 01:19:46.991010 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.991155 kubelet[3936]: E0114 01:19:46.991144 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.991155 kubelet[3936]: W0114 01:19:46.991153 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.991216 kubelet[3936]: E0114 01:19:46.991166 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.991299 kubelet[3936]: E0114 01:19:46.991288 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.991299 kubelet[3936]: W0114 01:19:46.991297 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.991350 kubelet[3936]: E0114 01:19:46.991308 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.991487 kubelet[3936]: E0114 01:19:46.991476 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.991487 kubelet[3936]: W0114 01:19:46.991484 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.991564 kubelet[3936]: E0114 01:19:46.991494 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.991624 kubelet[3936]: E0114 01:19:46.991603 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.991624 kubelet[3936]: W0114 01:19:46.991609 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.991624 kubelet[3936]: E0114 01:19:46.991620 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.991743 kubelet[3936]: E0114 01:19:46.991736 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.991743 kubelet[3936]: W0114 01:19:46.991741 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.991827 kubelet[3936]: E0114 01:19:46.991752 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.991977 kubelet[3936]: E0114 01:19:46.991862 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.991977 kubelet[3936]: W0114 01:19:46.991868 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.991977 kubelet[3936]: E0114 01:19:46.991955 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.991977 kubelet[3936]: W0114 01:19:46.991960 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.992069 kubelet[3936]: E0114 01:19:46.992039 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.992069 kubelet[3936]: W0114 01:19:46.992046 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.992129 kubelet[3936]: E0114 01:19:46.992122 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.992129 kubelet[3936]: W0114 01:19:46.992128 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.992173 kubelet[3936]: E0114 01:19:46.992135 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.992312 kubelet[3936]: E0114 01:19:46.992213 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.992312 kubelet[3936]: E0114 01:19:46.992214 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.992312 kubelet[3936]: W0114 01:19:46.992219 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.992312 kubelet[3936]: E0114 01:19:46.992223 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.992312 kubelet[3936]: E0114 01:19:46.992233 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.992312 kubelet[3936]: E0114 01:19:46.992263 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.992532 kubelet[3936]: E0114 01:19:46.992358 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.992532 kubelet[3936]: W0114 01:19:46.992364 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.992532 kubelet[3936]: E0114 01:19:46.992371 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.992532 kubelet[3936]: E0114 01:19:46.992492 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.992532 kubelet[3936]: W0114 01:19:46.992498 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.992695 kubelet[3936]: E0114 01:19:46.992590 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.992695 kubelet[3936]: W0114 01:19:46.992595 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.992695 kubelet[3936]: E0114 01:19:46.992601 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.992695 kubelet[3936]: E0114 01:19:46.992505 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.992807 kubelet[3936]: E0114 01:19:46.992723 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.992807 kubelet[3936]: W0114 01:19:46.992729 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.992807 kubelet[3936]: E0114 01:19:46.992735 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.992890 kubelet[3936]: E0114 01:19:46.992847 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.992890 kubelet[3936]: W0114 01:19:46.992852 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.992890 kubelet[3936]: E0114 01:19:46.992863 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.992988 kubelet[3936]: E0114 01:19:46.992985 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.993027 kubelet[3936]: W0114 01:19:46.992990 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.993027 kubelet[3936]: E0114 01:19:46.993003 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.993113 kubelet[3936]: E0114 01:19:46.993102 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.993113 kubelet[3936]: W0114 01:19:46.993110 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.993186 kubelet[3936]: E0114 01:19:46.993120 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.993253 kubelet[3936]: E0114 01:19:46.993204 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.993253 kubelet[3936]: W0114 01:19:46.993209 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.993253 kubelet[3936]: E0114 01:19:46.993223 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:46.993422 kubelet[3936]: E0114 01:19:46.993405 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.993422 kubelet[3936]: W0114 01:19:46.993418 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.993471 kubelet[3936]: E0114 01:19:46.993435 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:46.993651 kubelet[3936]: E0114 01:19:46.993613 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:46.993651 kubelet[3936]: W0114 01:19:46.993622 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:46.994011 kubelet[3936]: E0114 01:19:46.993780 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:47.000080 kubelet[3936]: E0114 01:19:47.000062 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:47.000080 kubelet[3936]: W0114 01:19:47.000076 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:47.000170 kubelet[3936]: E0114 01:19:47.000089 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:47.010997 containerd[2417]: time="2026-01-14T01:19:47.010549868Z" level=info msg="connecting to shim 424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89" address="unix:///run/containerd/s/8644691cf6828f2645130f0db5b0b29988bb4e39d9fdff5270490c0914bd84cb" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:19:47.028794 systemd[1]: Started cri-containerd-424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89.scope - libcontainer container 424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89. 
Jan 14 01:19:47.039000 audit: BPF prog-id=180 op=LOAD Jan 14 01:19:47.040000 audit: BPF prog-id=181 op=LOAD Jan 14 01:19:47.040000 audit[4483]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4472 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:47.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346536653336363963393462653933313837363731666562623134 Jan 14 01:19:47.040000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:19:47.040000 audit[4483]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:47.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346536653336363963393462653933313837363731666562623134 Jan 14 01:19:47.040000 audit: BPF prog-id=182 op=LOAD Jan 14 01:19:47.040000 audit[4483]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4472 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:47.040000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346536653336363963393462653933313837363731666562623134 Jan 14 01:19:47.040000 audit: BPF prog-id=183 op=LOAD Jan 14 01:19:47.040000 audit[4483]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4472 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:47.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346536653336363963393462653933313837363731666562623134 Jan 14 01:19:47.040000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:19:47.040000 audit[4483]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:47.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346536653336363963393462653933313837363731666562623134 Jan 14 01:19:47.040000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:19:47.040000 audit[4483]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:19:47.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346536653336363963393462653933313837363731666562623134 Jan 14 01:19:47.040000 audit: BPF prog-id=184 op=LOAD Jan 14 01:19:47.040000 audit[4483]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4472 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:47.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346536653336363963393462653933313837363731666562623134 Jan 14 01:19:47.054503 containerd[2417]: time="2026-01-14T01:19:47.054483420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7m55,Uid:cb50369f-38c1-4815-b8c4-f3fd18a9a5db,Namespace:calico-system,Attempt:0,} returns sandbox id \"424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89\"" Jan 14 01:19:48.315167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961075496.mount: Deactivated successfully. 
Jan 14 01:19:49.049237 kubelet[3936]: E0114 01:19:49.049197 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:19:49.365768 containerd[2417]: time="2026-01-14T01:19:49.365674380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:49.368927 containerd[2417]: time="2026-01-14T01:19:49.368800619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33736634" Jan 14 01:19:49.372246 containerd[2417]: time="2026-01-14T01:19:49.372223493Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:49.377454 containerd[2417]: time="2026-01-14T01:19:49.377408675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:49.377832 containerd[2417]: time="2026-01-14T01:19:49.377733650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.427485982s" Jan 14 01:19:49.377832 containerd[2417]: time="2026-01-14T01:19:49.377760460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:19:49.379457 containerd[2417]: time="2026-01-14T01:19:49.379249547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:19:49.391911 containerd[2417]: time="2026-01-14T01:19:49.391827286Z" level=info msg="CreateContainer within sandbox \"e08d04fabd7827334eb2422a14cded6922b4317890cb6abeadd5ca32afe9ae8b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:19:49.416183 containerd[2417]: time="2026-01-14T01:19:49.416158226Z" level=info msg="Container fe6c3c7a4801b194e0070b870077f3b35364c1a1f54b53c8c81bdfe9e108797c: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:19:49.435212 containerd[2417]: time="2026-01-14T01:19:49.435169421Z" level=info msg="CreateContainer within sandbox \"e08d04fabd7827334eb2422a14cded6922b4317890cb6abeadd5ca32afe9ae8b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe6c3c7a4801b194e0070b870077f3b35364c1a1f54b53c8c81bdfe9e108797c\"" Jan 14 01:19:49.435989 containerd[2417]: time="2026-01-14T01:19:49.435877422Z" level=info msg="StartContainer for \"fe6c3c7a4801b194e0070b870077f3b35364c1a1f54b53c8c81bdfe9e108797c\"" Jan 14 01:19:49.437914 containerd[2417]: time="2026-01-14T01:19:49.437864254Z" level=info msg="connecting to shim fe6c3c7a4801b194e0070b870077f3b35364c1a1f54b53c8c81bdfe9e108797c" address="unix:///run/containerd/s/09bf0a690cdc88bf13d1850a95ccb27f01519a9fe92ddb5f3aa42a11271aa200" protocol=ttrpc version=3 Jan 14 01:19:49.464801 systemd[1]: Started cri-containerd-fe6c3c7a4801b194e0070b870077f3b35364c1a1f54b53c8c81bdfe9e108797c.scope - libcontainer container fe6c3c7a4801b194e0070b870077f3b35364c1a1f54b53c8c81bdfe9e108797c. 
Jan 14 01:19:49.473000 audit: BPF prog-id=185 op=LOAD Jan 14 01:19:49.473000 audit: BPF prog-id=186 op=LOAD Jan 14 01:19:49.473000 audit[4518]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4349 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:49.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665366333633761343830316231393465303037306238373030373766 Jan 14 01:19:49.474000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:19:49.474000 audit[4518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4349 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:49.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665366333633761343830316231393465303037306238373030373766 Jan 14 01:19:49.474000 audit: BPF prog-id=187 op=LOAD Jan 14 01:19:49.474000 audit[4518]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4349 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:49.474000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665366333633761343830316231393465303037306238373030373766 Jan 14 01:19:49.474000 audit: BPF prog-id=188 op=LOAD Jan 14 01:19:49.474000 audit[4518]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4349 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:49.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665366333633761343830316231393465303037306238373030373766 Jan 14 01:19:49.474000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:19:49.474000 audit[4518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4349 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:49.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665366333633761343830316231393465303037306238373030373766 Jan 14 01:19:49.474000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:19:49.474000 audit[4518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4349 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:19:49.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665366333633761343830316231393465303037306238373030373766 Jan 14 01:19:49.474000 audit: BPF prog-id=189 op=LOAD Jan 14 01:19:49.474000 audit[4518]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4349 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:49.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665366333633761343830316231393465303037306238373030373766 Jan 14 01:19:49.511095 containerd[2417]: time="2026-01-14T01:19:49.511048016Z" level=info msg="StartContainer for \"fe6c3c7a4801b194e0070b870077f3b35364c1a1f54b53c8c81bdfe9e108797c\" returns successfully" Jan 14 01:19:50.142180 kubelet[3936]: I0114 01:19:50.141845 3936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d49d64cdb-9bpqc" podStartSLOduration=1.713136598 podStartE2EDuration="4.141829239s" podCreationTimestamp="2026-01-14 01:19:46 +0000 UTC" firstStartedPulling="2026-01-14 01:19:46.949782463 +0000 UTC m=+19.980713645" lastFinishedPulling="2026-01-14 01:19:49.378475094 +0000 UTC m=+22.409406286" observedRunningTime="2026-01-14 01:19:50.141056724 +0000 UTC m=+23.171987922" watchObservedRunningTime="2026-01-14 01:19:50.141829239 +0000 UTC m=+23.172760439" Jan 14 01:19:50.198434 kubelet[3936]: E0114 01:19:50.198397 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 
01:19:50.198434 kubelet[3936]: W0114 01:19:50.198414 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:50.198434 kubelet[3936]: E0114 01:19:50.198431 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:50.198628 kubelet[3936]: E0114 01:19:50.198542 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:50.198628 kubelet[3936]: W0114 01:19:50.198548 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:50.198628 kubelet[3936]: E0114 01:19:50.198555 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:50.198742 kubelet[3936]: E0114 01:19:50.198704 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:50.198742 kubelet[3936]: W0114 01:19:50.198710 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:50.198742 kubelet[3936]: E0114 01:19:50.198718 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:50.198891 kubelet[3936]: E0114 01:19:50.198861 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:50.198891 kubelet[3936]: W0114 01:19:50.198884 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:50.198954 kubelet[3936]: E0114 01:19:50.198891 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:50.199006 kubelet[3936]: E0114 01:19:50.198988 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:50.199006 kubelet[3936]: W0114 01:19:50.198995 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:50.199006 kubelet[3936]: E0114 01:19:50.199001 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:50.199109 kubelet[3936]: E0114 01:19:50.199083 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:50.199109 kubelet[3936]: W0114 01:19:50.199088 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:50.199109 kubelet[3936]: E0114 01:19:50.199094 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:19:50.199202 kubelet[3936]: E0114 01:19:50.199177 3936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:19:50.199202 kubelet[3936]: W0114 01:19:50.199182 3936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:19:50.199202 kubelet[3936]: E0114 01:19:50.199196 3936 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:19:50.730965 containerd[2417]: time="2026-01-14T01:19:50.730927335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:50.733913 containerd[2417]: time="2026-01-14T01:19:50.733770408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 14 01:19:50.738385 containerd[2417]: time="2026-01-14T01:19:50.738040052Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:50.742806 containerd[2417]: time="2026-01-14T01:19:50.742757675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:50.743288 containerd[2417]: time="2026-01-14T01:19:50.743129707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.363849042s" Jan 14 01:19:50.743288 containerd[2417]: time="2026-01-14T01:19:50.743158355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:19:50.745263 containerd[2417]: time="2026-01-14T01:19:50.745232374Z" level=info msg="CreateContainer within sandbox \"424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:19:50.767233 containerd[2417]: time="2026-01-14T01:19:50.764144954Z" level=info msg="Container eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:19:50.794683 containerd[2417]: time="2026-01-14T01:19:50.794619780Z" level=info msg="CreateContainer within sandbox \"424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a\"" Jan 14 01:19:50.795132 containerd[2417]: time="2026-01-14T01:19:50.795108977Z" level=info msg="StartContainer for \"eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a\"" Jan 14 01:19:50.796805 containerd[2417]: time="2026-01-14T01:19:50.796779470Z" level=info msg="connecting to shim eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a" address="unix:///run/containerd/s/8644691cf6828f2645130f0db5b0b29988bb4e39d9fdff5270490c0914bd84cb" protocol=ttrpc version=3 Jan 14 01:19:50.818812 systemd[1]: Started cri-containerd-eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a.scope - libcontainer container eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a. 
Jan 14 01:19:50.848000 audit: BPF prog-id=190 op=LOAD Jan 14 01:19:50.848000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4472 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:50.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563636666376236383338336432313865393963356165333634393662 Jan 14 01:19:50.849000 audit: BPF prog-id=191 op=LOAD Jan 14 01:19:50.849000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4472 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:50.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563636666376236383338336432313865393963356165333634393662 Jan 14 01:19:50.849000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:19:50.849000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:50.849000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563636666376236383338336432313865393963356165333634393662 Jan 14 01:19:50.849000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:19:50.849000 audit[4592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:50.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563636666376236383338336432313865393963356165333634393662 Jan 14 01:19:50.849000 audit: BPF prog-id=192 op=LOAD Jan 14 01:19:50.849000 audit[4592]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4472 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:50.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563636666376236383338336432313865393963356165333634393662 Jan 14 01:19:50.871795 containerd[2417]: time="2026-01-14T01:19:50.871771143Z" level=info msg="StartContainer for \"eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a\" returns successfully" Jan 14 01:19:50.874534 systemd[1]: cri-containerd-eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a.scope: Deactivated successfully. 
Jan 14 01:19:50.876000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:19:50.877948 containerd[2417]: time="2026-01-14T01:19:50.877925826Z" level=info msg="received container exit event container_id:\"eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a\" id:\"eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a\" pid:4605 exited_at:{seconds:1768353590 nanos:877575246}" Jan 14 01:19:50.898701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eccff7b68383d218e99c5ae36496b6b9f300e218c4ca6c32f5f45017d7f9858a-rootfs.mount: Deactivated successfully. Jan 14 01:19:51.048661 kubelet[3936]: E0114 01:19:51.048306 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:19:51.132798 kubelet[3936]: I0114 01:19:51.132779 3936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:19:53.049784 kubelet[3936]: E0114 01:19:53.048882 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:19:54.144941 containerd[2417]: time="2026-01-14T01:19:54.144900362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:19:55.049328 kubelet[3936]: E0114 01:19:55.048364 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg7tj" 
podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:19:55.213943 kubelet[3936]: I0114 01:19:55.213826 3936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:19:55.246000 audit[4643]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:55.249161 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 14 01:19:55.249222 kernel: audit: type=1325 audit(1768353595.246:592): table=filter:120 family=2 entries=21 op=nft_register_rule pid=4643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:55.246000 audit[4643]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee14e1af0 a2=0 a3=7ffee14e1adc items=0 ppid=4040 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:55.259930 kernel: audit: type=1300 audit(1768353595.246:592): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee14e1af0 a2=0 a3=7ffee14e1adc items=0 ppid=4040 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:55.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:55.263602 kernel: audit: type=1327 audit(1768353595.246:592): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:55.252000 audit[4643]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:55.267220 kernel: audit: type=1325 audit(1768353595.252:593): table=nat:121 
family=2 entries=19 op=nft_register_chain pid=4643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:19:55.252000 audit[4643]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffee14e1af0 a2=0 a3=7ffee14e1adc items=0 ppid=4040 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:55.272711 kernel: audit: type=1300 audit(1768353595.252:593): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffee14e1af0 a2=0 a3=7ffee14e1adc items=0 ppid=4040 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:55.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:55.275881 kernel: audit: type=1327 audit(1768353595.252:593): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:19:57.049654 kubelet[3936]: E0114 01:19:57.049594 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:19:57.761469 containerd[2417]: time="2026-01-14T01:19:57.761425348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:57.768130 containerd[2417]: time="2026-01-14T01:19:57.768019410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 
14 01:19:57.774547 containerd[2417]: time="2026-01-14T01:19:57.774518123Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:57.779929 containerd[2417]: time="2026-01-14T01:19:57.779879743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:19:57.780588 containerd[2417]: time="2026-01-14T01:19:57.780288151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.635343238s" Jan 14 01:19:57.780588 containerd[2417]: time="2026-01-14T01:19:57.780313403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:19:57.782550 containerd[2417]: time="2026-01-14T01:19:57.782522183Z" level=info msg="CreateContainer within sandbox \"424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:19:57.805448 containerd[2417]: time="2026-01-14T01:19:57.803496237Z" level=info msg="Container a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:19:57.826642 containerd[2417]: time="2026-01-14T01:19:57.826603010Z" level=info msg="CreateContainer within sandbox \"424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8\"" Jan 14 01:19:57.827280 containerd[2417]: time="2026-01-14T01:19:57.827233774Z" level=info msg="StartContainer for \"a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8\"" Jan 14 01:19:57.828904 containerd[2417]: time="2026-01-14T01:19:57.828868026Z" level=info msg="connecting to shim a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8" address="unix:///run/containerd/s/8644691cf6828f2645130f0db5b0b29988bb4e39d9fdff5270490c0914bd84cb" protocol=ttrpc version=3 Jan 14 01:19:57.851821 systemd[1]: Started cri-containerd-a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8.scope - libcontainer container a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8. Jan 14 01:19:57.891000 audit: BPF prog-id=193 op=LOAD Jan 14 01:19:57.891000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4472 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:57.900677 kernel: audit: type=1334 audit(1768353597.891:594): prog-id=193 op=LOAD Jan 14 01:19:57.900755 kernel: audit: type=1300 audit(1768353597.891:594): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4472 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:57.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323931383864396535323063333363303037666135346336646231 Jan 14 01:19:57.909809 kernel: audit: type=1327 audit(1768353597.891:594): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323931383864396535323063333363303037666135346336646231 Jan 14 01:19:57.891000 audit: BPF prog-id=194 op=LOAD Jan 14 01:19:57.891000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4472 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:57.913748 kernel: audit: type=1334 audit(1768353597.891:595): prog-id=194 op=LOAD Jan 14 01:19:57.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323931383864396535323063333363303037666135346336646231 Jan 14 01:19:57.891000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:19:57.891000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:57.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323931383864396535323063333363303037666135346336646231 Jan 14 01:19:57.891000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:19:57.891000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:57.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323931383864396535323063333363303037666135346336646231 Jan 14 01:19:57.891000 audit: BPF prog-id=195 op=LOAD Jan 14 01:19:57.891000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4472 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:57.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323931383864396535323063333363303037666135346336646231 Jan 14 01:19:57.932206 containerd[2417]: time="2026-01-14T01:19:57.932151398Z" level=info msg="StartContainer for \"a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8\" returns successfully" Jan 14 01:19:59.050680 kubelet[3936]: E0114 01:19:59.050331 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:19:59.200114 containerd[2417]: time="2026-01-14T01:19:59.200069473Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni 
config" Jan 14 01:19:59.201972 systemd[1]: cri-containerd-a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8.scope: Deactivated successfully. Jan 14 01:19:59.202271 systemd[1]: cri-containerd-a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8.scope: Consumed 434ms CPU time, 202.4M memory peak, 171.3M written to disk. Jan 14 01:19:59.203652 containerd[2417]: time="2026-01-14T01:19:59.203475040Z" level=info msg="received container exit event container_id:\"a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8\" id:\"a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8\" pid:4666 exited_at:{seconds:1768353599 nanos:203333062}" Jan 14 01:19:59.204000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:19:59.223941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a229188d9e520c33c007fa54c6db11bd63a7d0b1ac7e7819cc5b552d59ab5cf8-rootfs.mount: Deactivated successfully. Jan 14 01:19:59.287062 kubelet[3936]: I0114 01:19:59.286213 3936 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:19:59.337354 systemd[1]: Created slice kubepods-besteffort-pod1876e14b_df10_499c_9b9b_1ece31d0136a.slice - libcontainer container kubepods-besteffort-pod1876e14b_df10_499c_9b9b_1ece31d0136a.slice. Jan 14 01:19:59.350499 systemd[1]: Created slice kubepods-burstable-podb5b86644_851d_482c_92f0_3ce7f3e405fd.slice - libcontainer container kubepods-burstable-podb5b86644_851d_482c_92f0_3ce7f3e405fd.slice. Jan 14 01:19:59.360798 systemd[1]: Created slice kubepods-besteffort-poda185b488_e72a_4695_8538_5c10792b0a09.slice - libcontainer container kubepods-besteffort-poda185b488_e72a_4695_8538_5c10792b0a09.slice. Jan 14 01:19:59.368059 systemd[1]: Created slice kubepods-besteffort-podf6926f72_9c01_4e67_abef_2eb546c46570.slice - libcontainer container kubepods-besteffort-podf6926f72_9c01_4e67_abef_2eb546c46570.slice. 
Jan 14 01:19:59.374306 kubelet[3936]: I0114 01:19:59.374285 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5b86644-851d-482c-92f0-3ce7f3e405fd-config-volume\") pod \"coredns-668d6bf9bc-2ppg7\" (UID: \"b5b86644-851d-482c-92f0-3ce7f3e405fd\") " pod="kube-system/coredns-668d6bf9bc-2ppg7" Jan 14 01:19:59.375129 kubelet[3936]: I0114 01:19:59.374849 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh2t8\" (UniqueName: \"kubernetes.io/projected/f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5-kube-api-access-hh2t8\") pod \"goldmane-666569f655-dg8l8\" (UID: \"f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5\") " pod="calico-system/goldmane-666569f655-dg8l8" Jan 14 01:19:59.376425 kubelet[3936]: I0114 01:19:59.376403 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4f27767-b32c-43ae-95eb-f1d5e5f34f59-calico-apiserver-certs\") pod \"calico-apiserver-7758cb5d69-rt2pc\" (UID: \"b4f27767-b32c-43ae-95eb-f1d5e5f34f59\") " pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" Jan 14 01:19:59.377694 kubelet[3936]: I0114 01:19:59.377673 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmgs5\" (UniqueName: \"kubernetes.io/projected/b4f27767-b32c-43ae-95eb-f1d5e5f34f59-kube-api-access-dmgs5\") pod \"calico-apiserver-7758cb5d69-rt2pc\" (UID: \"b4f27767-b32c-43ae-95eb-f1d5e5f34f59\") " pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" Jan 14 01:19:59.377975 kubelet[3936]: I0114 01:19:59.377940 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsc9r\" (UniqueName: \"kubernetes.io/projected/b5b86644-851d-482c-92f0-3ce7f3e405fd-kube-api-access-vsc9r\") pod 
\"coredns-668d6bf9bc-2ppg7\" (UID: \"b5b86644-851d-482c-92f0-3ce7f3e405fd\") " pod="kube-system/coredns-668d6bf9bc-2ppg7" Jan 14 01:19:59.378143 kubelet[3936]: I0114 01:19:59.378133 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5-goldmane-ca-bundle\") pod \"goldmane-666569f655-dg8l8\" (UID: \"f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5\") " pod="calico-system/goldmane-666569f655-dg8l8" Jan 14 01:19:59.378246 systemd[1]: Created slice kubepods-burstable-pod4837b182_3864_4240_9f59_b7d855d0bb02.slice - libcontainer container kubepods-burstable-pod4837b182_3864_4240_9f59_b7d855d0bb02.slice. Jan 14 01:19:59.378438 kubelet[3936]: I0114 01:19:59.378417 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1876e14b-df10-499c-9b9b-1ece31d0136a-tigera-ca-bundle\") pod \"calico-kube-controllers-f77f5cb44-nf9jt\" (UID: \"1876e14b-df10-499c-9b9b-1ece31d0136a\") " pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" Jan 14 01:19:59.378793 kubelet[3936]: I0114 01:19:59.378772 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6926f72-9c01-4e67-abef-2eb546c46570-calico-apiserver-certs\") pod \"calico-apiserver-5d8778f546-mp9tk\" (UID: \"f6926f72-9c01-4e67-abef-2eb546c46570\") " pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" Jan 14 01:19:59.379038 kubelet[3936]: I0114 01:19:59.379025 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnsz2\" (UniqueName: \"kubernetes.io/projected/1876e14b-df10-499c-9b9b-1ece31d0136a-kube-api-access-lnsz2\") pod \"calico-kube-controllers-f77f5cb44-nf9jt\" (UID: \"1876e14b-df10-499c-9b9b-1ece31d0136a\") " 
pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" Jan 14 01:19:59.379305 kubelet[3936]: I0114 01:19:59.379175 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29j5g\" (UniqueName: \"kubernetes.io/projected/a185b488-e72a-4695-8538-5c10792b0a09-kube-api-access-29j5g\") pod \"whisker-7d4596845d-9dx6l\" (UID: \"a185b488-e72a-4695-8538-5c10792b0a09\") " pod="calico-system/whisker-7d4596845d-9dx6l" Jan 14 01:19:59.379305 kubelet[3936]: I0114 01:19:59.379204 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a185b488-e72a-4695-8538-5c10792b0a09-whisker-ca-bundle\") pod \"whisker-7d4596845d-9dx6l\" (UID: \"a185b488-e72a-4695-8538-5c10792b0a09\") " pod="calico-system/whisker-7d4596845d-9dx6l" Jan 14 01:19:59.379488 kubelet[3936]: I0114 01:19:59.379476 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a185b488-e72a-4695-8538-5c10792b0a09-whisker-backend-key-pair\") pod \"whisker-7d4596845d-9dx6l\" (UID: \"a185b488-e72a-4695-8538-5c10792b0a09\") " pod="calico-system/whisker-7d4596845d-9dx6l" Jan 14 01:19:59.379654 kubelet[3936]: I0114 01:19:59.379599 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689q6\" (UniqueName: \"kubernetes.io/projected/f6926f72-9c01-4e67-abef-2eb546c46570-kube-api-access-689q6\") pod \"calico-apiserver-5d8778f546-mp9tk\" (UID: \"f6926f72-9c01-4e67-abef-2eb546c46570\") " pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" Jan 14 01:19:59.379716 kubelet[3936]: I0114 01:19:59.379706 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5-config\") pod 
\"goldmane-666569f655-dg8l8\" (UID: \"f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5\") " pod="calico-system/goldmane-666569f655-dg8l8" Jan 14 01:19:59.379788 kubelet[3936]: I0114 01:19:59.379780 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5-goldmane-key-pair\") pod \"goldmane-666569f655-dg8l8\" (UID: \"f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5\") " pod="calico-system/goldmane-666569f655-dg8l8" Jan 14 01:19:59.379863 kubelet[3936]: I0114 01:19:59.379855 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mfgs\" (UniqueName: \"kubernetes.io/projected/73f6bb79-7f15-4fdc-bde4-bdf058188aed-kube-api-access-8mfgs\") pod \"calico-apiserver-5d8778f546-8gqr9\" (UID: \"73f6bb79-7f15-4fdc-bde4-bdf058188aed\") " pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" Jan 14 01:19:59.379931 kubelet[3936]: I0114 01:19:59.379923 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7wk\" (UniqueName: \"kubernetes.io/projected/4837b182-3864-4240-9f59-b7d855d0bb02-kube-api-access-sq7wk\") pod \"coredns-668d6bf9bc-ld4sb\" (UID: \"4837b182-3864-4240-9f59-b7d855d0bb02\") " pod="kube-system/coredns-668d6bf9bc-ld4sb" Jan 14 01:19:59.380778 kubelet[3936]: I0114 01:19:59.379994 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4837b182-3864-4240-9f59-b7d855d0bb02-config-volume\") pod \"coredns-668d6bf9bc-ld4sb\" (UID: \"4837b182-3864-4240-9f59-b7d855d0bb02\") " pod="kube-system/coredns-668d6bf9bc-ld4sb" Jan 14 01:19:59.380778 kubelet[3936]: I0114 01:19:59.380017 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/73f6bb79-7f15-4fdc-bde4-bdf058188aed-calico-apiserver-certs\") pod \"calico-apiserver-5d8778f546-8gqr9\" (UID: \"73f6bb79-7f15-4fdc-bde4-bdf058188aed\") " pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" Jan 14 01:19:59.391524 systemd[1]: Created slice kubepods-besteffort-pod73f6bb79_7f15_4fdc_bde4_bdf058188aed.slice - libcontainer container kubepods-besteffort-pod73f6bb79_7f15_4fdc_bde4_bdf058188aed.slice. Jan 14 01:19:59.397233 systemd[1]: Created slice kubepods-besteffort-podf1cc4c9c_5d75_49d5_a28f_b34a79d2a4c5.slice - libcontainer container kubepods-besteffort-podf1cc4c9c_5d75_49d5_a28f_b34a79d2a4c5.slice. Jan 14 01:19:59.401997 systemd[1]: Created slice kubepods-besteffort-podb4f27767_b32c_43ae_95eb_f1d5e5f34f59.slice - libcontainer container kubepods-besteffort-podb4f27767_b32c_43ae_95eb_f1d5e5f34f59.slice. Jan 14 01:19:59.645106 containerd[2417]: time="2026-01-14T01:19:59.644997252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f77f5cb44-nf9jt,Uid:1876e14b-df10-499c-9b9b-1ece31d0136a,Namespace:calico-system,Attempt:0,}" Jan 14 01:19:59.656031 containerd[2417]: time="2026-01-14T01:19:59.655998798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ppg7,Uid:b5b86644-851d-482c-92f0-3ce7f3e405fd,Namespace:kube-system,Attempt:0,}" Jan 14 01:19:59.664799 containerd[2417]: time="2026-01-14T01:19:59.664773065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4596845d-9dx6l,Uid:a185b488-e72a-4695-8538-5c10792b0a09,Namespace:calico-system,Attempt:0,}" Jan 14 01:19:59.673924 containerd[2417]: time="2026-01-14T01:19:59.673811898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8778f546-mp9tk,Uid:f6926f72-9c01-4e67-abef-2eb546c46570,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:19:59.719089 containerd[2417]: time="2026-01-14T01:19:59.718548060Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-ld4sb,Uid:4837b182-3864-4240-9f59-b7d855d0bb02,Namespace:kube-system,Attempt:0,}" Jan 14 01:19:59.719089 containerd[2417]: time="2026-01-14T01:19:59.718759407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8778f546-8gqr9,Uid:73f6bb79-7f15-4fdc-bde4-bdf058188aed,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:19:59.719605 containerd[2417]: time="2026-01-14T01:19:59.719583833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758cb5d69-rt2pc,Uid:b4f27767-b32c-43ae-95eb-f1d5e5f34f59,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:19:59.719699 containerd[2417]: time="2026-01-14T01:19:59.719585680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dg8l8,Uid:f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5,Namespace:calico-system,Attempt:0,}" Jan 14 01:20:00.333682 containerd[2417]: time="2026-01-14T01:20:00.333142054Z" level=error msg="Failed to destroy network for sandbox \"482802a6e128b9c3bf946711668add2506f6c48a9e9741b64a9d9111428d2468\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.376097 containerd[2417]: time="2026-01-14T01:20:00.376053902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ld4sb,Uid:4837b182-3864-4240-9f59-b7d855d0bb02,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"482802a6e128b9c3bf946711668add2506f6c48a9e9741b64a9d9111428d2468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.376494 kubelet[3936]: E0114 01:20:00.376450 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"482802a6e128b9c3bf946711668add2506f6c48a9e9741b64a9d9111428d2468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.376822 kubelet[3936]: E0114 01:20:00.376530 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482802a6e128b9c3bf946711668add2506f6c48a9e9741b64a9d9111428d2468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ld4sb" Jan 14 01:20:00.376822 kubelet[3936]: E0114 01:20:00.376551 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482802a6e128b9c3bf946711668add2506f6c48a9e9741b64a9d9111428d2468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ld4sb" Jan 14 01:20:00.376822 kubelet[3936]: E0114 01:20:00.376603 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ld4sb_kube-system(4837b182-3864-4240-9f59-b7d855d0bb02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ld4sb_kube-system(4837b182-3864-4240-9f59-b7d855d0bb02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"482802a6e128b9c3bf946711668add2506f6c48a9e9741b64a9d9111428d2468\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-ld4sb" podUID="4837b182-3864-4240-9f59-b7d855d0bb02" Jan 14 01:20:00.474925 containerd[2417]: time="2026-01-14T01:20:00.474882955Z" level=error msg="Failed to destroy network for sandbox \"e83bc30bbdf431af63db8c426030003633190775453128fba0cde09b30ed005e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.475251 containerd[2417]: time="2026-01-14T01:20:00.475196904Z" level=error msg="Failed to destroy network for sandbox \"b2bb0b17a8f77e4c57e91fe9cecb9ad7115c408d8f81eb076dd0b1ba987a981a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.482603 containerd[2417]: time="2026-01-14T01:20:00.482562680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f77f5cb44-nf9jt,Uid:1876e14b-df10-499c-9b9b-1ece31d0136a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e83bc30bbdf431af63db8c426030003633190775453128fba0cde09b30ed005e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.483230 kubelet[3936]: E0114 01:20:00.482915 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e83bc30bbdf431af63db8c426030003633190775453128fba0cde09b30ed005e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.483230 kubelet[3936]: E0114 01:20:00.482964 3936 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e83bc30bbdf431af63db8c426030003633190775453128fba0cde09b30ed005e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" Jan 14 01:20:00.483230 kubelet[3936]: E0114 01:20:00.482986 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e83bc30bbdf431af63db8c426030003633190775453128fba0cde09b30ed005e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" Jan 14 01:20:00.483366 kubelet[3936]: E0114 01:20:00.483030 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f77f5cb44-nf9jt_calico-system(1876e14b-df10-499c-9b9b-1ece31d0136a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f77f5cb44-nf9jt_calico-system(1876e14b-df10-499c-9b9b-1ece31d0136a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e83bc30bbdf431af63db8c426030003633190775453128fba0cde09b30ed005e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:20:00.491528 containerd[2417]: time="2026-01-14T01:20:00.491439749Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-2ppg7,Uid:b5b86644-851d-482c-92f0-3ce7f3e405fd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2bb0b17a8f77e4c57e91fe9cecb9ad7115c408d8f81eb076dd0b1ba987a981a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.493961 kubelet[3936]: E0114 01:20:00.491749 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2bb0b17a8f77e4c57e91fe9cecb9ad7115c408d8f81eb076dd0b1ba987a981a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.493961 kubelet[3936]: E0114 01:20:00.493711 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2bb0b17a8f77e4c57e91fe9cecb9ad7115c408d8f81eb076dd0b1ba987a981a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2ppg7" Jan 14 01:20:00.493961 kubelet[3936]: E0114 01:20:00.493737 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2bb0b17a8f77e4c57e91fe9cecb9ad7115c408d8f81eb076dd0b1ba987a981a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2ppg7" Jan 14 01:20:00.494107 kubelet[3936]: E0114 01:20:00.493779 3936 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2ppg7_kube-system(b5b86644-851d-482c-92f0-3ce7f3e405fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2ppg7_kube-system(b5b86644-851d-482c-92f0-3ce7f3e405fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2bb0b17a8f77e4c57e91fe9cecb9ad7115c408d8f81eb076dd0b1ba987a981a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2ppg7" podUID="b5b86644-851d-482c-92f0-3ce7f3e405fd" Jan 14 01:20:00.502526 containerd[2417]: time="2026-01-14T01:20:00.502494806Z" level=error msg="Failed to destroy network for sandbox \"344fa0f0e69b3357240dfb0c116f81a784d09bed8d1689c884e4321e11281662\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.503476 containerd[2417]: time="2026-01-14T01:20:00.503436313Z" level=error msg="Failed to destroy network for sandbox \"ce694779d82919cc6c1410d6687f5c6fb2d43c7b41135b32b1fb962b466df76d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.506042 containerd[2417]: time="2026-01-14T01:20:00.506002854Z" level=error msg="Failed to destroy network for sandbox \"83832edc7f50edc6f8cb426adca37995ec00bb05330ddbc8a5b6f6793931f3e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.509951 containerd[2417]: time="2026-01-14T01:20:00.509924182Z" level=error msg="Failed to destroy network for sandbox 
\"e185a6ceb95699dc7070943a1e625d9066ca063a5f39ea105df549e88ca5ecdd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.510982 containerd[2417]: time="2026-01-14T01:20:00.510454033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4596845d-9dx6l,Uid:a185b488-e72a-4695-8538-5c10792b0a09,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"344fa0f0e69b3357240dfb0c116f81a784d09bed8d1689c884e4321e11281662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.511080 kubelet[3936]: E0114 01:20:00.510603 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"344fa0f0e69b3357240dfb0c116f81a784d09bed8d1689c884e4321e11281662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.511080 kubelet[3936]: E0114 01:20:00.510657 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"344fa0f0e69b3357240dfb0c116f81a784d09bed8d1689c884e4321e11281662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d4596845d-9dx6l" Jan 14 01:20:00.511080 kubelet[3936]: E0114 01:20:00.510678 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"344fa0f0e69b3357240dfb0c116f81a784d09bed8d1689c884e4321e11281662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d4596845d-9dx6l" Jan 14 01:20:00.511175 kubelet[3936]: E0114 01:20:00.510717 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d4596845d-9dx6l_calico-system(a185b488-e72a-4695-8538-5c10792b0a09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d4596845d-9dx6l_calico-system(a185b488-e72a-4695-8538-5c10792b0a09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"344fa0f0e69b3357240dfb0c116f81a784d09bed8d1689c884e4321e11281662\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d4596845d-9dx6l" podUID="a185b488-e72a-4695-8538-5c10792b0a09" Jan 14 01:20:00.512271 containerd[2417]: time="2026-01-14T01:20:00.512240795Z" level=error msg="Failed to destroy network for sandbox \"3278c2a424cad2b400d2dcd7b1fd5cfb25322058fd3721be3f0ba84d303801f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.532692 containerd[2417]: time="2026-01-14T01:20:00.532655537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dg8l8,Uid:f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce694779d82919cc6c1410d6687f5c6fb2d43c7b41135b32b1fb962b466df76d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.532851 kubelet[3936]: E0114 01:20:00.532822 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce694779d82919cc6c1410d6687f5c6fb2d43c7b41135b32b1fb962b466df76d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.532909 kubelet[3936]: E0114 01:20:00.532866 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce694779d82919cc6c1410d6687f5c6fb2d43c7b41135b32b1fb962b466df76d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dg8l8" Jan 14 01:20:00.532909 kubelet[3936]: E0114 01:20:00.532889 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce694779d82919cc6c1410d6687f5c6fb2d43c7b41135b32b1fb962b466df76d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dg8l8" Jan 14 01:20:00.532964 kubelet[3936]: E0114 01:20:00.532944 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dg8l8_calico-system(f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dg8l8_calico-system(f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ce694779d82919cc6c1410d6687f5c6fb2d43c7b41135b32b1fb962b466df76d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:20:00.536137 containerd[2417]: time="2026-01-14T01:20:00.536102641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8778f546-mp9tk,Uid:f6926f72-9c01-4e67-abef-2eb546c46570,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83832edc7f50edc6f8cb426adca37995ec00bb05330ddbc8a5b6f6793931f3e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.536287 kubelet[3936]: E0114 01:20:00.536250 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83832edc7f50edc6f8cb426adca37995ec00bb05330ddbc8a5b6f6793931f3e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.536333 kubelet[3936]: E0114 01:20:00.536300 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83832edc7f50edc6f8cb426adca37995ec00bb05330ddbc8a5b6f6793931f3e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" Jan 14 01:20:00.536333 kubelet[3936]: E0114 01:20:00.536320 3936 kuberuntime_manager.go:1237] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83832edc7f50edc6f8cb426adca37995ec00bb05330ddbc8a5b6f6793931f3e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" Jan 14 01:20:00.536426 kubelet[3936]: E0114 01:20:00.536359 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d8778f546-mp9tk_calico-apiserver(f6926f72-9c01-4e67-abef-2eb546c46570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d8778f546-mp9tk_calico-apiserver(f6926f72-9c01-4e67-abef-2eb546c46570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83832edc7f50edc6f8cb426adca37995ec00bb05330ddbc8a5b6f6793931f3e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:20:00.539408 containerd[2417]: time="2026-01-14T01:20:00.539370338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8778f546-8gqr9,Uid:73f6bb79-7f15-4fdc-bde4-bdf058188aed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e185a6ceb95699dc7070943a1e625d9066ca063a5f39ea105df549e88ca5ecdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.539545 kubelet[3936]: E0114 01:20:00.539517 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"e185a6ceb95699dc7070943a1e625d9066ca063a5f39ea105df549e88ca5ecdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.539598 kubelet[3936]: E0114 01:20:00.539561 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e185a6ceb95699dc7070943a1e625d9066ca063a5f39ea105df549e88ca5ecdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" Jan 14 01:20:00.539598 kubelet[3936]: E0114 01:20:00.539581 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e185a6ceb95699dc7070943a1e625d9066ca063a5f39ea105df549e88ca5ecdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" Jan 14 01:20:00.539665 kubelet[3936]: E0114 01:20:00.539616 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d8778f546-8gqr9_calico-apiserver(73f6bb79-7f15-4fdc-bde4-bdf058188aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d8778f546-8gqr9_calico-apiserver(73f6bb79-7f15-4fdc-bde4-bdf058188aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e185a6ceb95699dc7070943a1e625d9066ca063a5f39ea105df549e88ca5ecdd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:20:00.542905 containerd[2417]: time="2026-01-14T01:20:00.542873148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758cb5d69-rt2pc,Uid:b4f27767-b32c-43ae-95eb-f1d5e5f34f59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3278c2a424cad2b400d2dcd7b1fd5cfb25322058fd3721be3f0ba84d303801f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.543039 kubelet[3936]: E0114 01:20:00.543022 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3278c2a424cad2b400d2dcd7b1fd5cfb25322058fd3721be3f0ba84d303801f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:00.543097 kubelet[3936]: E0114 01:20:00.543070 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3278c2a424cad2b400d2dcd7b1fd5cfb25322058fd3721be3f0ba84d303801f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" Jan 14 01:20:00.543097 kubelet[3936]: E0114 01:20:00.543088 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3278c2a424cad2b400d2dcd7b1fd5cfb25322058fd3721be3f0ba84d303801f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" Jan 14 01:20:00.543171 kubelet[3936]: E0114 01:20:00.543128 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7758cb5d69-rt2pc_calico-apiserver(b4f27767-b32c-43ae-95eb-f1d5e5f34f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7758cb5d69-rt2pc_calico-apiserver(b4f27767-b32c-43ae-95eb-f1d5e5f34f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3278c2a424cad2b400d2dcd7b1fd5cfb25322058fd3721be3f0ba84d303801f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:20:01.053473 systemd[1]: Created slice kubepods-besteffort-pod0ad549b6_0df1_4bac_8f3a_1bc2943edac4.slice - libcontainer container kubepods-besteffort-pod0ad549b6_0df1_4bac_8f3a_1bc2943edac4.slice. 
Jan 14 01:20:01.055742 containerd[2417]: time="2026-01-14T01:20:01.055707678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg7tj,Uid:0ad549b6-0df1-4bac-8f3a-1bc2943edac4,Namespace:calico-system,Attempt:0,}" Jan 14 01:20:01.163827 containerd[2417]: time="2026-01-14T01:20:01.163502547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:20:01.173679 containerd[2417]: time="2026-01-14T01:20:01.173627968Z" level=error msg="Failed to destroy network for sandbox \"ed9225e664e5975e0179298a3a145b72bc1b040fa18a5e6fc14e833cf09bfdb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:01.223058 systemd[1]: run-netns-cni\x2d2ffb2a42\x2dbdbe\x2d32c0\x2d1cc1\x2df7d2b550cd2d.mount: Deactivated successfully. Jan 14 01:20:01.223145 systemd[1]: run-netns-cni\x2d1d16f380\x2d20ed\x2ddf7d\x2df37a\x2d098413191c24.mount: Deactivated successfully. Jan 14 01:20:01.223202 systemd[1]: run-netns-cni\x2d2a050cb2\x2d6184\x2d1cf5\x2d158f\x2d6eae941a2944.mount: Deactivated successfully. Jan 14 01:20:01.223251 systemd[1]: run-netns-cni\x2d2070bd76\x2d3f2b\x2d3112\x2d8b22\x2dcf54612e7f41.mount: Deactivated successfully. 
Jan 14 01:20:01.224750 containerd[2417]: time="2026-01-14T01:20:01.224263517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg7tj,Uid:0ad549b6-0df1-4bac-8f3a-1bc2943edac4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed9225e664e5975e0179298a3a145b72bc1b040fa18a5e6fc14e833cf09bfdb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:01.224973 kubelet[3936]: E0114 01:20:01.224888 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed9225e664e5975e0179298a3a145b72bc1b040fa18a5e6fc14e833cf09bfdb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:01.224973 kubelet[3936]: E0114 01:20:01.224955 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed9225e664e5975e0179298a3a145b72bc1b040fa18a5e6fc14e833cf09bfdb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bg7tj" Jan 14 01:20:01.225062 kubelet[3936]: E0114 01:20:01.224977 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed9225e664e5975e0179298a3a145b72bc1b040fa18a5e6fc14e833cf09bfdb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bg7tj" 
Jan 14 01:20:01.225062 kubelet[3936]: E0114 01:20:01.225010 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed9225e664e5975e0179298a3a145b72bc1b040fa18a5e6fc14e833cf09bfdb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:20:11.049868 containerd[2417]: time="2026-01-14T01:20:11.049252234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f77f5cb44-nf9jt,Uid:1876e14b-df10-499c-9b9b-1ece31d0136a,Namespace:calico-system,Attempt:0,}" Jan 14 01:20:11.956192 containerd[2417]: time="2026-01-14T01:20:11.956140644Z" level=error msg="Failed to destroy network for sandbox \"43ad5f31a879d6424d8a3999289f68af42fb387330d4d90a9a41fb7d9ef28d96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:11.958027 systemd[1]: run-netns-cni\x2dea6291f3\x2df874\x2d5eb6\x2dc9c6\x2dfb581b09de68.mount: Deactivated successfully. 
Jan 14 01:20:12.025343 containerd[2417]: time="2026-01-14T01:20:12.025098523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f77f5cb44-nf9jt,Uid:1876e14b-df10-499c-9b9b-1ece31d0136a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ad5f31a879d6424d8a3999289f68af42fb387330d4d90a9a41fb7d9ef28d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:12.025545 kubelet[3936]: E0114 01:20:12.025508 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ad5f31a879d6424d8a3999289f68af42fb387330d4d90a9a41fb7d9ef28d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:12.025838 kubelet[3936]: E0114 01:20:12.025570 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ad5f31a879d6424d8a3999289f68af42fb387330d4d90a9a41fb7d9ef28d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" Jan 14 01:20:12.025838 kubelet[3936]: E0114 01:20:12.025593 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ad5f31a879d6424d8a3999289f68af42fb387330d4d90a9a41fb7d9ef28d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" Jan 14 01:20:12.025838 kubelet[3936]: E0114 01:20:12.025771 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f77f5cb44-nf9jt_calico-system(1876e14b-df10-499c-9b9b-1ece31d0136a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f77f5cb44-nf9jt_calico-system(1876e14b-df10-499c-9b9b-1ece31d0136a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43ad5f31a879d6424d8a3999289f68af42fb387330d4d90a9a41fb7d9ef28d96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:20:12.049707 containerd[2417]: time="2026-01-14T01:20:12.049597460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758cb5d69-rt2pc,Uid:b4f27767-b32c-43ae-95eb-f1d5e5f34f59,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:20:12.188234 containerd[2417]: time="2026-01-14T01:20:12.188190319Z" level=error msg="Failed to destroy network for sandbox \"dc662f311a9ef07523c8cc3bfe3285687c43287841c3a3383a55b76060531bcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:12.190232 systemd[1]: run-netns-cni\x2dca02baf5\x2dbc25\x2d8133\x2d682b\x2d5125a6582445.mount: Deactivated successfully. 
Jan 14 01:20:12.201719 containerd[2417]: time="2026-01-14T01:20:12.201439724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758cb5d69-rt2pc,Uid:b4f27767-b32c-43ae-95eb-f1d5e5f34f59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc662f311a9ef07523c8cc3bfe3285687c43287841c3a3383a55b76060531bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:12.201850 kubelet[3936]: E0114 01:20:12.201787 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc662f311a9ef07523c8cc3bfe3285687c43287841c3a3383a55b76060531bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:12.201850 kubelet[3936]: E0114 01:20:12.201832 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc662f311a9ef07523c8cc3bfe3285687c43287841c3a3383a55b76060531bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" Jan 14 01:20:12.201937 kubelet[3936]: E0114 01:20:12.201851 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc662f311a9ef07523c8cc3bfe3285687c43287841c3a3383a55b76060531bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" Jan 14 01:20:12.201937 kubelet[3936]: E0114 01:20:12.201893 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7758cb5d69-rt2pc_calico-apiserver(b4f27767-b32c-43ae-95eb-f1d5e5f34f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7758cb5d69-rt2pc_calico-apiserver(b4f27767-b32c-43ae-95eb-f1d5e5f34f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc662f311a9ef07523c8cc3bfe3285687c43287841c3a3383a55b76060531bcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:20:13.051943 containerd[2417]: time="2026-01-14T01:20:13.051901913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ppg7,Uid:b5b86644-851d-482c-92f0-3ce7f3e405fd,Namespace:kube-system,Attempt:0,}" Jan 14 01:20:13.054052 containerd[2417]: time="2026-01-14T01:20:13.053862126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ld4sb,Uid:4837b182-3864-4240-9f59-b7d855d0bb02,Namespace:kube-system,Attempt:0,}" Jan 14 01:20:13.255667 containerd[2417]: time="2026-01-14T01:20:13.255074292Z" level=error msg="Failed to destroy network for sandbox \"f5adb3d18abc1f0162bf9cfb1595f741d9a50bd60a5d7fabd43598aff8933c5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:13.258349 systemd[1]: run-netns-cni\x2d817d9d3a\x2dedaa\x2db274\x2d08dd\x2d97622aa81634.mount: Deactivated successfully. 
Jan 14 01:20:13.281552 containerd[2417]: time="2026-01-14T01:20:13.281397893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ppg7,Uid:b5b86644-851d-482c-92f0-3ce7f3e405fd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5adb3d18abc1f0162bf9cfb1595f741d9a50bd60a5d7fabd43598aff8933c5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:13.281685 kubelet[3936]: E0114 01:20:13.281555 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5adb3d18abc1f0162bf9cfb1595f741d9a50bd60a5d7fabd43598aff8933c5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:13.281685 kubelet[3936]: E0114 01:20:13.281605 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5adb3d18abc1f0162bf9cfb1595f741d9a50bd60a5d7fabd43598aff8933c5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2ppg7" Jan 14 01:20:13.281943 kubelet[3936]: E0114 01:20:13.281625 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5adb3d18abc1f0162bf9cfb1595f741d9a50bd60a5d7fabd43598aff8933c5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-2ppg7" Jan 14 01:20:13.282547 kubelet[3936]: E0114 01:20:13.281795 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2ppg7_kube-system(b5b86644-851d-482c-92f0-3ce7f3e405fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2ppg7_kube-system(b5b86644-851d-482c-92f0-3ce7f3e405fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5adb3d18abc1f0162bf9cfb1595f741d9a50bd60a5d7fabd43598aff8933c5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2ppg7" podUID="b5b86644-851d-482c-92f0-3ce7f3e405fd" Jan 14 01:20:13.296001 containerd[2417]: time="2026-01-14T01:20:13.295962523Z" level=error msg="Failed to destroy network for sandbox \"3861ee3dbfdfb3108b437a55d91d2c9c4d6c372000d917bc33b56cbf8622af81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:13.297854 systemd[1]: run-netns-cni\x2d86b16d3c\x2d6e88\x2dca92\x2df57c\x2d775ae08c9c8d.mount: Deactivated successfully. 
Jan 14 01:20:13.330550 containerd[2417]: time="2026-01-14T01:20:13.330466919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ld4sb,Uid:4837b182-3864-4240-9f59-b7d855d0bb02,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3861ee3dbfdfb3108b437a55d91d2c9c4d6c372000d917bc33b56cbf8622af81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:13.331079 kubelet[3936]: E0114 01:20:13.330651 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3861ee3dbfdfb3108b437a55d91d2c9c4d6c372000d917bc33b56cbf8622af81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:13.331079 kubelet[3936]: E0114 01:20:13.330699 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3861ee3dbfdfb3108b437a55d91d2c9c4d6c372000d917bc33b56cbf8622af81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ld4sb" Jan 14 01:20:13.331079 kubelet[3936]: E0114 01:20:13.330719 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3861ee3dbfdfb3108b437a55d91d2c9c4d6c372000d917bc33b56cbf8622af81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-ld4sb" Jan 14 01:20:13.331174 kubelet[3936]: E0114 01:20:13.330756 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ld4sb_kube-system(4837b182-3864-4240-9f59-b7d855d0bb02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ld4sb_kube-system(4837b182-3864-4240-9f59-b7d855d0bb02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3861ee3dbfdfb3108b437a55d91d2c9c4d6c372000d917bc33b56cbf8622af81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ld4sb" podUID="4837b182-3864-4240-9f59-b7d855d0bb02" Jan 14 01:20:14.048939 containerd[2417]: time="2026-01-14T01:20:14.048838884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8778f546-8gqr9,Uid:73f6bb79-7f15-4fdc-bde4-bdf058188aed,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:20:14.124587 containerd[2417]: time="2026-01-14T01:20:14.124542289Z" level=error msg="Failed to destroy network for sandbox \"1e2cf059f866fa69d01ad7bbec39ffc98eae6a58ed8061c8538747b481721859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:14.127379 systemd[1]: run-netns-cni\x2dbeb1bf66\x2de6f0\x2d2dfe\x2d53ad\x2d253c5d143fb1.mount: Deactivated successfully. 
Jan 14 01:20:14.132747 containerd[2417]: time="2026-01-14T01:20:14.132710154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8778f546-8gqr9,Uid:73f6bb79-7f15-4fdc-bde4-bdf058188aed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2cf059f866fa69d01ad7bbec39ffc98eae6a58ed8061c8538747b481721859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:14.133351 kubelet[3936]: E0114 01:20:14.133290 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2cf059f866fa69d01ad7bbec39ffc98eae6a58ed8061c8538747b481721859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:14.133577 kubelet[3936]: E0114 01:20:14.133480 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2cf059f866fa69d01ad7bbec39ffc98eae6a58ed8061c8538747b481721859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" Jan 14 01:20:14.133577 kubelet[3936]: E0114 01:20:14.133504 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2cf059f866fa69d01ad7bbec39ffc98eae6a58ed8061c8538747b481721859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" Jan 14 01:20:14.134023 kubelet[3936]: E0114 01:20:14.133667 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d8778f546-8gqr9_calico-apiserver(73f6bb79-7f15-4fdc-bde4-bdf058188aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d8778f546-8gqr9_calico-apiserver(73f6bb79-7f15-4fdc-bde4-bdf058188aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e2cf059f866fa69d01ad7bbec39ffc98eae6a58ed8061c8538747b481721859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:20:14.250818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75335830.mount: Deactivated successfully. 
Jan 14 01:20:14.372199 containerd[2417]: time="2026-01-14T01:20:14.372106937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:14.375169 containerd[2417]: time="2026-01-14T01:20:14.375141540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 01:20:14.419552 containerd[2417]: time="2026-01-14T01:20:14.419509711Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:14.467002 containerd[2417]: time="2026-01-14T01:20:14.466963629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:14.467563 containerd[2417]: time="2026-01-14T01:20:14.467537559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 13.30400089s" Jan 14 01:20:14.467616 containerd[2417]: time="2026-01-14T01:20:14.467568842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:20:14.481317 containerd[2417]: time="2026-01-14T01:20:14.481294357Z" level=info msg="CreateContainer within sandbox \"424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:20:14.775850 containerd[2417]: time="2026-01-14T01:20:14.775763580Z" level=info msg="Container 
97bc8fd531632b1db63abdeea8e8188caee8ffef3fdf1063f6dab61c93c8e508: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:20:14.931906 containerd[2417]: time="2026-01-14T01:20:14.931869719Z" level=info msg="CreateContainer within sandbox \"424e6e3669c94be93187671febb14b999066104c3b07d4fff0ac598618bb7f89\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"97bc8fd531632b1db63abdeea8e8188caee8ffef3fdf1063f6dab61c93c8e508\"" Jan 14 01:20:14.932391 containerd[2417]: time="2026-01-14T01:20:14.932356814Z" level=info msg="StartContainer for \"97bc8fd531632b1db63abdeea8e8188caee8ffef3fdf1063f6dab61c93c8e508\"" Jan 14 01:20:14.933899 containerd[2417]: time="2026-01-14T01:20:14.933865669Z" level=info msg="connecting to shim 97bc8fd531632b1db63abdeea8e8188caee8ffef3fdf1063f6dab61c93c8e508" address="unix:///run/containerd/s/8644691cf6828f2645130f0db5b0b29988bb4e39d9fdff5270490c0914bd84cb" protocol=ttrpc version=3 Jan 14 01:20:14.952841 systemd[1]: Started cri-containerd-97bc8fd531632b1db63abdeea8e8188caee8ffef3fdf1063f6dab61c93c8e508.scope - libcontainer container 97bc8fd531632b1db63abdeea8e8188caee8ffef3fdf1063f6dab61c93c8e508. 
Jan 14 01:20:14.999000 audit: BPF prog-id=196 op=LOAD Jan 14 01:20:15.001174 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 01:20:15.001263 kernel: audit: type=1334 audit(1768353614.999:600): prog-id=196 op=LOAD Jan 14 01:20:14.999000 audit[5077]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4472 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:15.007583 kernel: audit: type=1300 audit(1768353614.999:600): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4472 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:14.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937626338666435333136333262316462363361626465656138653831 Jan 14 01:20:15.014213 kernel: audit: type=1327 audit(1768353614.999:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937626338666435333136333262316462363361626465656138653831 Jan 14 01:20:15.017168 kernel: audit: type=1334 audit(1768353614.999:601): prog-id=197 op=LOAD Jan 14 01:20:14.999000 audit: BPF prog-id=197 op=LOAD Jan 14 01:20:15.022577 kernel: audit: type=1300 audit(1768353614.999:601): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4472 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:14.999000 audit[5077]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4472 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:14.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937626338666435333136333262316462363361626465656138653831 Jan 14 01:20:15.029904 kernel: audit: type=1327 audit(1768353614.999:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937626338666435333136333262316462363361626465656138653831 Jan 14 01:20:15.029967 kernel: audit: type=1334 audit(1768353614.999:602): prog-id=197 op=UNLOAD Jan 14 01:20:14.999000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:20:14.999000 audit[5077]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:15.033226 kernel: audit: type=1300 audit(1768353614.999:602): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:15.040032 kernel: audit: type=1327 audit(1768353614.999:602): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937626338666435333136333262316462363361626465656138653831 Jan 14 01:20:14.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937626338666435333136333262316462363361626465656138653831 Jan 14 01:20:15.041472 kernel: audit: type=1334 audit(1768353614.999:603): prog-id=196 op=UNLOAD Jan 14 01:20:14.999000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:20:14.999000 audit[5077]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:14.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937626338666435333136333262316462363361626465656138653831 Jan 14 01:20:14.999000 audit: BPF prog-id=198 op=LOAD Jan 14 01:20:14.999000 audit[5077]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4472 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:14.999000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937626338666435333136333262316462363361626465656138653831 Jan 14 01:20:15.049879 containerd[2417]: time="2026-01-14T01:20:15.049816389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8778f546-mp9tk,Uid:f6926f72-9c01-4e67-abef-2eb546c46570,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:20:15.051252 containerd[2417]: time="2026-01-14T01:20:15.051217802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg7tj,Uid:0ad549b6-0df1-4bac-8f3a-1bc2943edac4,Namespace:calico-system,Attempt:0,}" Jan 14 01:20:15.051402 containerd[2417]: time="2026-01-14T01:20:15.051378786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dg8l8,Uid:f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5,Namespace:calico-system,Attempt:0,}" Jan 14 01:20:15.051551 containerd[2417]: time="2026-01-14T01:20:15.051502800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4596845d-9dx6l,Uid:a185b488-e72a-4695-8538-5c10792b0a09,Namespace:calico-system,Attempt:0,}" Jan 14 01:20:15.053607 containerd[2417]: time="2026-01-14T01:20:15.053585142Z" level=info msg="StartContainer for \"97bc8fd531632b1db63abdeea8e8188caee8ffef3fdf1063f6dab61c93c8e508\" returns successfully" Jan 14 01:20:15.208767 kubelet[3936]: I0114 01:20:15.208717 3936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n7m55" podStartSLOduration=1.7957038490000001 podStartE2EDuration="29.208700233s" podCreationTimestamp="2026-01-14 01:19:46 +0000 UTC" firstStartedPulling="2026-01-14 01:19:47.055325258 +0000 UTC m=+20.086256451" lastFinishedPulling="2026-01-14 01:20:14.468321632 +0000 UTC m=+47.499252835" observedRunningTime="2026-01-14 01:20:15.208246993 +0000 UTC m=+48.239178191" 
watchObservedRunningTime="2026-01-14 01:20:15.208700233 +0000 UTC m=+48.239631415" Jan 14 01:20:15.425949 containerd[2417]: time="2026-01-14T01:20:15.425866057Z" level=error msg="Failed to destroy network for sandbox \"70e36420b770eaa0f86d25b994f82ef9225ff612111f0d80fbeb9ce6231d9f54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:15.428363 systemd[1]: run-netns-cni\x2dedf89b6e\x2d6d56\x2d28ed\x2de551\x2d740ebd357a01.mount: Deactivated successfully. Jan 14 01:20:15.642122 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:20:15.642230 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 01:20:16.634422 kubelet[3936]: I0114 01:20:16.634388 3936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:20:17.174919 containerd[2417]: time="2026-01-14T01:20:17.174827283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dg8l8,Uid:f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e36420b770eaa0f86d25b994f82ef9225ff612111f0d80fbeb9ce6231d9f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:20:17.177166 kubelet[3936]: E0114 01:20:17.177031 3936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e36420b770eaa0f86d25b994f82ef9225ff612111f0d80fbeb9ce6231d9f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
01:20:17.177840 kubelet[3936]: E0114 01:20:17.177415 3936 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e36420b770eaa0f86d25b994f82ef9225ff612111f0d80fbeb9ce6231d9f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dg8l8" Jan 14 01:20:17.177840 kubelet[3936]: E0114 01:20:17.177443 3936 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e36420b770eaa0f86d25b994f82ef9225ff612111f0d80fbeb9ce6231d9f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dg8l8" Jan 14 01:20:17.177840 kubelet[3936]: E0114 01:20:17.177490 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dg8l8_calico-system(f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dg8l8_calico-system(f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70e36420b770eaa0f86d25b994f82ef9225ff612111f0d80fbeb9ce6231d9f54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:20:17.398886 systemd-networkd[2060]: cali536072ac1bd: Link UP Jan 14 01:20:17.405691 systemd-networkd[2060]: cali536072ac1bd: Gained carrier Jan 14 01:20:17.442195 containerd[2417]: 2026-01-14 
01:20:17.188 [INFO][5258] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:20:17.442195 containerd[2417]: 2026-01-14 01:20:17.224 [INFO][5258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0 calico-apiserver-5d8778f546- calico-apiserver f6926f72-9c01-4e67-abef-2eb546c46570 825 0 2026-01-14 01:19:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d8778f546 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad calico-apiserver-5d8778f546-mp9tk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali536072ac1bd [] [] }} ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-mp9tk" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-" Jan 14 01:20:17.442195 containerd[2417]: 2026-01-14 01:20:17.224 [INFO][5258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-mp9tk" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" Jan 14 01:20:17.442195 containerd[2417]: 2026-01-14 01:20:17.292 [INFO][5332] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" HandleID="k8s-pod-network.f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" Jan 14 01:20:17.443100 containerd[2417]: 2026-01-14 01:20:17.292 [INFO][5332] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" HandleID="k8s-pod-network.f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b73e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"calico-apiserver-5d8778f546-mp9tk", "timestamp":"2026-01-14 01:20:17.292385798 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:17.443100 containerd[2417]: 2026-01-14 01:20:17.293 [INFO][5332] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:17.443100 containerd[2417]: 2026-01-14 01:20:17.293 [INFO][5332] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:20:17.443100 containerd[2417]: 2026-01-14 01:20:17.293 [INFO][5332] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:17.443100 containerd[2417]: 2026-01-14 01:20:17.302 [INFO][5332] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.443100 containerd[2417]: 2026-01-14 01:20:17.307 [INFO][5332] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.443100 containerd[2417]: 2026-01-14 01:20:17.316 [INFO][5332] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.443100 containerd[2417]: 2026-01-14 01:20:17.318 [INFO][5332] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.443100 containerd[2417]: 2026-01-14 01:20:17.320 [INFO][5332] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.443317 containerd[2417]: 2026-01-14 01:20:17.320 [INFO][5332] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.443317 containerd[2417]: 2026-01-14 01:20:17.322 [INFO][5332] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0 Jan 14 01:20:17.443317 containerd[2417]: 2026-01-14 01:20:17.330 [INFO][5332] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.443317 containerd[2417]: 2026-01-14 01:20:17.340 [INFO][5332] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.108.1/26] block=192.168.108.0/26 handle="k8s-pod-network.f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.443317 containerd[2417]: 2026-01-14 01:20:17.340 [INFO][5332] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.1/26] handle="k8s-pod-network.f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.443317 containerd[2417]: 2026-01-14 01:20:17.340 [INFO][5332] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:17.443317 containerd[2417]: 2026-01-14 01:20:17.341 [INFO][5332] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.1/26] IPv6=[] ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" HandleID="k8s-pod-network.f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" Jan 14 01:20:17.443468 containerd[2417]: 2026-01-14 01:20:17.349 [INFO][5258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-mp9tk" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0", GenerateName:"calico-apiserver-5d8778f546-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6926f72-9c01-4e67-abef-2eb546c46570", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5d8778f546", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"calico-apiserver-5d8778f546-mp9tk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali536072ac1bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:17.443527 containerd[2417]: 2026-01-14 01:20:17.350 [INFO][5258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.1/32] ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-mp9tk" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" Jan 14 01:20:17.443527 containerd[2417]: 2026-01-14 01:20:17.350 [INFO][5258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali536072ac1bd ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-mp9tk" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" Jan 14 01:20:17.443527 containerd[2417]: 2026-01-14 01:20:17.406 [INFO][5258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-mp9tk" 
WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" Jan 14 01:20:17.443597 containerd[2417]: 2026-01-14 01:20:17.408 [INFO][5258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-mp9tk" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0", GenerateName:"calico-apiserver-5d8778f546-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6926f72-9c01-4e67-abef-2eb546c46570", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8778f546", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0", Pod:"calico-apiserver-5d8778f546-mp9tk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali536072ac1bd", MAC:"9e:05:f7:cf:8b:7c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:17.444805 containerd[2417]: 2026-01-14 01:20:17.439 [INFO][5258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-mp9tk" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--mp9tk-eth0" Jan 14 01:20:17.484743 systemd-networkd[2060]: calia444fea4ddd: Link UP Jan 14 01:20:17.484916 systemd-networkd[2060]: calia444fea4ddd: Gained carrier Jan 14 01:20:17.514762 containerd[2417]: 2026-01-14 01:20:17.230 [INFO][5305] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:20:17.514762 containerd[2417]: 2026-01-14 01:20:17.255 [INFO][5305] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0 whisker-7d4596845d- calico-system a185b488-e72a-4695-8538-5c10792b0a09 886 0 2026-01-14 01:19:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d4596845d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad whisker-7d4596845d-9dx6l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia444fea4ddd [] [] }} ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Namespace="calico-system" Pod="whisker-7d4596845d-9dx6l" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-" Jan 14 01:20:17.514762 containerd[2417]: 2026-01-14 01:20:17.256 [INFO][5305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Namespace="calico-system" Pod="whisker-7d4596845d-9dx6l" 
WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:17.514762 containerd[2417]: 2026-01-14 01:20:17.325 [INFO][5340] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:17.514943 containerd[2417]: 2026-01-14 01:20:17.325 [INFO][5340] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"whisker-7d4596845d-9dx6l", "timestamp":"2026-01-14 01:20:17.325070943 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:17.514943 containerd[2417]: 2026-01-14 01:20:17.326 [INFO][5340] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:17.514943 containerd[2417]: 2026-01-14 01:20:17.341 [INFO][5340] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:20:17.514943 containerd[2417]: 2026-01-14 01:20:17.341 [INFO][5340] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:17.514943 containerd[2417]: 2026-01-14 01:20:17.401 [INFO][5340] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.514943 containerd[2417]: 2026-01-14 01:20:17.441 [INFO][5340] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.514943 containerd[2417]: 2026-01-14 01:20:17.450 [INFO][5340] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.514943 containerd[2417]: 2026-01-14 01:20:17.452 [INFO][5340] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.514943 containerd[2417]: 2026-01-14 01:20:17.455 [INFO][5340] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.515187 containerd[2417]: 2026-01-14 01:20:17.455 [INFO][5340] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.515187 containerd[2417]: 2026-01-14 01:20:17.456 [INFO][5340] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a Jan 14 01:20:17.515187 containerd[2417]: 2026-01-14 01:20:17.461 [INFO][5340] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.515187 containerd[2417]: 2026-01-14 01:20:17.472 [INFO][5340] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.108.2/26] block=192.168.108.0/26 handle="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.515187 containerd[2417]: 2026-01-14 01:20:17.473 [INFO][5340] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.2/26] handle="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.515187 containerd[2417]: 2026-01-14 01:20:17.473 [INFO][5340] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:17.515187 containerd[2417]: 2026-01-14 01:20:17.473 [INFO][5340] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.2/26] IPv6=[] ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:17.515337 containerd[2417]: 2026-01-14 01:20:17.476 [INFO][5305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Namespace="calico-system" Pod="whisker-7d4596845d-9dx6l" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0", GenerateName:"whisker-7d4596845d-", Namespace:"calico-system", SelfLink:"", UID:"a185b488-e72a-4695-8538-5c10792b0a09", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d4596845d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"whisker-7d4596845d-9dx6l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia444fea4ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:17.515337 containerd[2417]: 2026-01-14 01:20:17.476 [INFO][5305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.2/32] ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Namespace="calico-system" Pod="whisker-7d4596845d-9dx6l" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:17.515427 containerd[2417]: 2026-01-14 01:20:17.476 [INFO][5305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia444fea4ddd ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Namespace="calico-system" Pod="whisker-7d4596845d-9dx6l" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:17.515427 containerd[2417]: 2026-01-14 01:20:17.485 [INFO][5305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Namespace="calico-system" Pod="whisker-7d4596845d-9dx6l" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:17.515473 containerd[2417]: 2026-01-14 01:20:17.486 [INFO][5305] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Namespace="calico-system" Pod="whisker-7d4596845d-9dx6l" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0", GenerateName:"whisker-7d4596845d-", Namespace:"calico-system", SelfLink:"", UID:"a185b488-e72a-4695-8538-5c10792b0a09", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d4596845d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a", Pod:"whisker-7d4596845d-9dx6l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia444fea4ddd", MAC:"06:df:0a:98:3d:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:17.515535 containerd[2417]: 2026-01-14 01:20:17.507 [INFO][5305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" 
Namespace="calico-system" Pod="whisker-7d4596845d-9dx6l" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:17.527480 containerd[2417]: time="2026-01-14T01:20:17.526901979Z" level=info msg="connecting to shim f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0" address="unix:///run/containerd/s/72ee8c6c8cbebc35bbcbde7e2ca6a0c8d56a1fda7556c2dc122b661e21e75c6f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:17.585414 systemd[1]: Started cri-containerd-f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0.scope - libcontainer container f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0. Jan 14 01:20:17.610866 systemd-networkd[2060]: cali59dffc4b173: Link UP Jan 14 01:20:17.611434 systemd-networkd[2060]: cali59dffc4b173: Gained carrier Jan 14 01:20:17.617000 audit: BPF prog-id=199 op=LOAD Jan 14 01:20:17.618000 audit: BPF prog-id=200 op=LOAD Jan 14 01:20:17.618000 audit[5413]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5398 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632653264366562366133323263653434316261316437326463636533 Jan 14 01:20:17.620000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:20:17.620000 audit[5413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5398 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.620000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632653264366562366133323263653434316261316437326463636533 Jan 14 01:20:17.620000 audit: BPF prog-id=201 op=LOAD Jan 14 01:20:17.620000 audit[5413]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5398 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632653264366562366133323263653434316261316437326463636533 Jan 14 01:20:17.620000 audit: BPF prog-id=202 op=LOAD Jan 14 01:20:17.620000 audit[5413]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5398 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632653264366562366133323263653434316261316437326463636533 Jan 14 01:20:17.620000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:20:17.620000 audit[5413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5398 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:20:17.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632653264366562366133323263653434316261316437326463636533 Jan 14 01:20:17.620000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:20:17.620000 audit[5413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5398 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632653264366562366133323263653434316261316437326463636533 Jan 14 01:20:17.620000 audit: BPF prog-id=203 op=LOAD Jan 14 01:20:17.620000 audit[5413]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5398 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632653264366562366133323263653434316261316437326463636533 Jan 14 01:20:17.633946 containerd[2417]: time="2026-01-14T01:20:17.633908605Z" level=info msg="connecting to shim 97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" address="unix:///run/containerd/s/fbeca504ea02bcf7976e1a162baa119bde44b273d9d6ddee175de785a6a9c242" namespace=k8s.io protocol=ttrpc version=3 
Jan 14 01:20:17.642810 containerd[2417]: 2026-01-14 01:20:17.243 [INFO][5288] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:20:17.642810 containerd[2417]: 2026-01-14 01:20:17.260 [INFO][5288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0 csi-node-driver- calico-system 0ad549b6-0df1-4bac-8f3a-1bc2943edac4 704 0 2026-01-14 01:19:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad csi-node-driver-bg7tj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali59dffc4b173 [] [] }} ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Namespace="calico-system" Pod="csi-node-driver-bg7tj" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-" Jan 14 01:20:17.642810 containerd[2417]: 2026-01-14 01:20:17.260 [INFO][5288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Namespace="calico-system" Pod="csi-node-driver-bg7tj" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" Jan 14 01:20:17.642810 containerd[2417]: 2026-01-14 01:20:17.328 [INFO][5345] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" HandleID="k8s-pod-network.b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" Jan 14 01:20:17.642990 containerd[2417]: 2026-01-14 01:20:17.328 [INFO][5345] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" HandleID="k8s-pod-network.b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"csi-node-driver-bg7tj", "timestamp":"2026-01-14 01:20:17.32872488 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:17.642990 containerd[2417]: 2026-01-14 01:20:17.329 [INFO][5345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:17.642990 containerd[2417]: 2026-01-14 01:20:17.473 [INFO][5345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:20:17.642990 containerd[2417]: 2026-01-14 01:20:17.474 [INFO][5345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:17.642990 containerd[2417]: 2026-01-14 01:20:17.509 [INFO][5345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.642990 containerd[2417]: 2026-01-14 01:20:17.539 [INFO][5345] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.642990 containerd[2417]: 2026-01-14 01:20:17.551 [INFO][5345] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.642990 containerd[2417]: 2026-01-14 01:20:17.555 [INFO][5345] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.642990 containerd[2417]: 2026-01-14 01:20:17.559 [INFO][5345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.643195 containerd[2417]: 2026-01-14 01:20:17.559 [INFO][5345] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.643195 containerd[2417]: 2026-01-14 01:20:17.562 [INFO][5345] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f Jan 14 01:20:17.643195 containerd[2417]: 2026-01-14 01:20:17.572 [INFO][5345] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.643195 containerd[2417]: 2026-01-14 01:20:17.584 [INFO][5345] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.108.3/26] block=192.168.108.0/26 handle="k8s-pod-network.b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.643195 containerd[2417]: 2026-01-14 01:20:17.584 [INFO][5345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.3/26] handle="k8s-pod-network.b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:17.643195 containerd[2417]: 2026-01-14 01:20:17.584 [INFO][5345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:17.643195 containerd[2417]: 2026-01-14 01:20:17.585 [INFO][5345] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.3/26] IPv6=[] ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" HandleID="k8s-pod-network.b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" Jan 14 01:20:17.643349 containerd[2417]: 2026-01-14 01:20:17.594 [INFO][5288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Namespace="calico-system" Pod="csi-node-driver-bg7tj" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ad549b6-0df1-4bac-8f3a-1bc2943edac4", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"csi-node-driver-bg7tj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59dffc4b173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:17.643410 containerd[2417]: 2026-01-14 01:20:17.594 [INFO][5288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.3/32] ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Namespace="calico-system" Pod="csi-node-driver-bg7tj" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" Jan 14 01:20:17.643410 containerd[2417]: 2026-01-14 01:20:17.594 [INFO][5288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59dffc4b173 ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Namespace="calico-system" Pod="csi-node-driver-bg7tj" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" Jan 14 01:20:17.643410 containerd[2417]: 2026-01-14 01:20:17.612 [INFO][5288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Namespace="calico-system" Pod="csi-node-driver-bg7tj" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" Jan 14 01:20:17.643479 
containerd[2417]: 2026-01-14 01:20:17.621 [INFO][5288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Namespace="calico-system" Pod="csi-node-driver-bg7tj" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ad549b6-0df1-4bac-8f3a-1bc2943edac4", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f", Pod:"csi-node-driver-bg7tj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59dffc4b173", MAC:"3a:54:08:d6:8f:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:17.643537 containerd[2417]: 
2026-01-14 01:20:17.639 [INFO][5288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" Namespace="calico-system" Pod="csi-node-driver-bg7tj" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-csi--node--driver--bg7tj-eth0" Jan 14 01:20:17.672056 systemd[1]: Started cri-containerd-97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a.scope - libcontainer container 97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a. Jan 14 01:20:17.706849 containerd[2417]: time="2026-01-14T01:20:17.706778385Z" level=info msg="connecting to shim b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f" address="unix:///run/containerd/s/d5a9304dd3ac81ed4f0dc2778f2ebf2f8d964fb3b8de683e99122e6629963fe7" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:17.711000 audit: BPF prog-id=204 op=LOAD Jan 14 01:20:17.711000 audit[5499]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe70c00660 a2=98 a3=1fffffffffffffff items=0 ppid=5246 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.711000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:20:17.711000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:20:17.711000 audit[5499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe70c00630 a3=0 items=0 ppid=5246 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.711000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:20:17.711000 audit: BPF prog-id=205 op=LOAD Jan 14 01:20:17.711000 audit[5499]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe70c00540 a2=94 a3=3 items=0 ppid=5246 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.711000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:20:17.712000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:20:17.712000 audit[5499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe70c00540 a2=94 a3=3 items=0 ppid=5246 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:20:17.712000 audit: BPF prog-id=206 op=LOAD Jan 14 01:20:17.712000 audit[5499]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe70c00580 a2=94 a3=7ffe70c00760 items=0 ppid=5246 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:20:17.712000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:20:17.712000 audit[5499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe70c00580 a2=94 a3=7ffe70c00760 items=0 ppid=5246 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:20:17.715000 audit: BPF prog-id=207 op=LOAD Jan 14 01:20:17.715000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff909bc180 a2=98 a3=3 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.715000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.715000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:20:17.715000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff909bc150 a3=0 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.715000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.715000 audit: BPF prog-id=208 op=LOAD Jan 14 01:20:17.715000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff909bbf70 a2=94 a3=54428f items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.715000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.716000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:20:17.716000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff909bbf70 a2=94 a3=54428f items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.716000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.716000 audit: BPF prog-id=209 op=LOAD Jan 14 01:20:17.716000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff909bbfa0 a2=94 a3=2 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.716000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.716000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:20:17.716000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff909bbfa0 a2=0 a3=2 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.716000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 
01:20:17.749942 systemd[1]: Started cri-containerd-b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f.scope - libcontainer container b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f. Jan 14 01:20:17.778000 audit: BPF prog-id=210 op=LOAD Jan 14 01:20:17.779000 audit: BPF prog-id=211 op=LOAD Jan 14 01:20:17.779000 audit[5464]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5447 pid=5464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646533313135656637666363306338366336313935653964653765 Jan 14 01:20:17.779000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:20:17.779000 audit[5464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5447 pid=5464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646533313135656637666363306338366336313935653964653765 Jan 14 01:20:17.779000 audit: BPF prog-id=212 op=LOAD Jan 14 01:20:17.779000 audit[5464]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5447 pid=5464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:20:17.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646533313135656637666363306338366336313935653964653765 Jan 14 01:20:17.779000 audit: BPF prog-id=213 op=LOAD Jan 14 01:20:17.779000 audit[5464]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5447 pid=5464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646533313135656637666363306338366336313935653964653765 Jan 14 01:20:17.779000 audit: BPF prog-id=213 op=UNLOAD Jan 14 01:20:17.779000 audit[5464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5447 pid=5464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646533313135656637666363306338366336313935653964653765 Jan 14 01:20:17.779000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:20:17.779000 audit[5464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5447 pid=5464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646533313135656637666363306338366336313935653964653765 Jan 14 01:20:17.779000 audit: BPF prog-id=214 op=LOAD Jan 14 01:20:17.779000 audit[5464]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5447 pid=5464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646533313135656637666363306338366336313935653964653765 Jan 14 01:20:17.797000 audit: BPF prog-id=215 op=LOAD Jan 14 01:20:17.798000 audit: BPF prog-id=216 op=LOAD Jan 14 01:20:17.798000 audit[5512]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5491 pid=5512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366232616134643062313161323262323839303431366534343330 Jan 14 01:20:17.798000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:20:17.798000 audit[5512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5491 pid=5512 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366232616134643062313161323262323839303431366534343330 Jan 14 01:20:17.798000 audit: BPF prog-id=217 op=LOAD Jan 14 01:20:17.798000 audit[5512]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5491 pid=5512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366232616134643062313161323262323839303431366534343330 Jan 14 01:20:17.798000 audit: BPF prog-id=218 op=LOAD Jan 14 01:20:17.798000 audit[5512]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5491 pid=5512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366232616134643062313161323262323839303431366534343330 Jan 14 01:20:17.798000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:20:17.798000 audit[5512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 
ppid=5491 pid=5512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366232616134643062313161323262323839303431366534343330 Jan 14 01:20:17.798000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:20:17.798000 audit[5512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5491 pid=5512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366232616134643062313161323262323839303431366534343330 Jan 14 01:20:17.798000 audit: BPF prog-id=219 op=LOAD Jan 14 01:20:17.798000 audit[5512]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5491 pid=5512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366232616134643062313161323262323839303431366534343330 Jan 14 01:20:17.810274 containerd[2417]: time="2026-01-14T01:20:17.809917728Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d8778f546-mp9tk,Uid:f6926f72-9c01-4e67-abef-2eb546c46570,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f2e2d6eb6a322ce441ba1d72dcce3446a67c18fb86aff5f3925e50d66c5001f0\"" Jan 14 01:20:17.812766 containerd[2417]: time="2026-01-14T01:20:17.812614933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:20:17.828111 containerd[2417]: time="2026-01-14T01:20:17.828080727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg7tj,Uid:0ad549b6-0df1-4bac-8f3a-1bc2943edac4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b26b2aa4d0b11a22b2890416e44306c5b60db4a055228c78e9f0082784298a5f\"" Jan 14 01:20:17.862454 containerd[2417]: time="2026-01-14T01:20:17.862424368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4596845d-9dx6l,Uid:a185b488-e72a-4695-8538-5c10792b0a09,Namespace:calico-system,Attempt:0,} returns sandbox id \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\"" Jan 14 01:20:17.934000 audit: BPF prog-id=220 op=LOAD Jan 14 01:20:17.934000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff909bbe60 a2=94 a3=1 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.934000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.934000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:20:17.934000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff909bbe60 a2=94 a3=1 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.934000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 
14 01:20:17.943000 audit: BPF prog-id=221 op=LOAD Jan 14 01:20:17.943000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff909bbe50 a2=94 a3=4 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.943000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:20:17.943000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff909bbe50 a2=0 a3=4 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.943000 audit: BPF prog-id=222 op=LOAD Jan 14 01:20:17.943000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff909bbcb0 a2=94 a3=5 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.943000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:20:17.943000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff909bbcb0 a2=0 a3=5 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.943000 audit: BPF prog-id=223 op=LOAD Jan 14 01:20:17.943000 
audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff909bbed0 a2=94 a3=6 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.943000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:20:17.943000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff909bbed0 a2=0 a3=6 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.944000 audit: BPF prog-id=224 op=LOAD Jan 14 01:20:17.944000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff909bb680 a2=94 a3=88 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.944000 audit: BPF prog-id=225 op=LOAD Jan 14 01:20:17.944000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff909bb500 a2=94 a3=2 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.944000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:20:17.944000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 
a1=7fff909bb530 a2=0 a3=7fff909bb630 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.944000 audit: BPF prog-id=224 op=UNLOAD Jan 14 01:20:17.944000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=306a7d10 a2=0 a3=b24f86fec3421029 items=0 ppid=5246 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:20:17.950000 audit: BPF prog-id=226 op=LOAD Jan 14 01:20:17.950000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8539fed0 a2=98 a3=1999999999999999 items=0 ppid=5246 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.950000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:20:17.951000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:20:17.951000 audit[5553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd8539fea0 a3=0 items=0 ppid=5246 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.951000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:20:17.951000 audit: BPF prog-id=227 op=LOAD Jan 14 01:20:17.951000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8539fdb0 a2=94 a3=ffff items=0 ppid=5246 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.951000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:20:17.951000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:20:17.951000 audit[5553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd8539fdb0 a2=94 a3=ffff items=0 ppid=5246 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.951000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:20:17.951000 audit: BPF prog-id=228 op=LOAD Jan 14 01:20:17.951000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8539fdf0 a2=94 a3=7ffd8539ffd0 items=0 ppid=5246 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.951000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:20:17.951000 audit: BPF prog-id=228 op=UNLOAD Jan 14 01:20:17.951000 audit[5553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd8539fdf0 a2=94 a3=7ffd8539ffd0 items=0 ppid=5246 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:17.951000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:20:18.068298 systemd-networkd[2060]: vxlan.calico: Link UP Jan 14 01:20:18.068493 systemd-networkd[2060]: vxlan.calico: Gained carrier Jan 14 01:20:18.076000 audit: BPF prog-id=229 op=LOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81233fd0 a2=98 a3=0 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.076000 audit: BPF prog-id=229 op=UNLOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=8 a2=7fff81233fa0 a3=0 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.076000 audit: BPF prog-id=230 op=LOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81233de0 a2=94 a3=54428f items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.076000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff81233de0 a2=94 a3=54428f items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.076000 audit: BPF prog-id=231 op=LOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81233e10 a2=94 a3=2 items=0 
ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.076000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff81233e10 a2=0 a3=2 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.076000 audit: BPF prog-id=232 op=LOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff81233bc0 a2=94 a3=4 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.076000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff81233bc0 a2=94 a3=4 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.076000 audit: BPF prog-id=233 op=LOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff81233cc0 a2=94 a3=7fff81233e40 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.076000 audit: BPF prog-id=233 op=UNLOAD Jan 14 01:20:18.076000 audit[5577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff81233cc0 a2=0 a3=7fff81233e40 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.077000 audit: BPF prog-id=234 op=LOAD Jan 14 01:20:18.077000 audit[5577]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff812333f0 a2=94 a3=2 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.077000 audit: BPF prog-id=234 op=UNLOAD Jan 14 01:20:18.077000 audit[5577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff812333f0 a2=0 a3=2 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.077000 audit: BPF prog-id=235 op=LOAD Jan 14 01:20:18.077000 audit[5577]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff812334f0 a2=94 a3=30 items=0 ppid=5246 pid=5577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:20:18.082000 audit: BPF prog-id=236 op=LOAD Jan 14 01:20:18.082000 audit[5581]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd410efc50 a2=98 a3=0 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.082000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:20:18.082000 audit[5581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd410efc20 a3=0 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.082000 audit: BPF prog-id=237 op=LOAD Jan 14 01:20:18.082000 audit[5581]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd410efa40 a2=94 a3=54428f items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.082000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:20:18.082000 audit[5581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd410efa40 a2=94 a3=54428f items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.082000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.082000 audit: BPF prog-id=238 op=LOAD Jan 14 01:20:18.082000 audit[5581]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd410efa70 a2=94 a3=2 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.082000 audit: BPF prog-id=238 op=UNLOAD Jan 14 01:20:18.082000 audit[5581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd410efa70 a2=0 a3=2 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.086819 containerd[2417]: time="2026-01-14T01:20:18.086792394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:18.096609 containerd[2417]: time="2026-01-14T01:20:18.096511047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:20:18.096609 containerd[2417]: time="2026-01-14T01:20:18.096589796Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:18.097420 kubelet[3936]: E0114 01:20:18.096845 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:18.097420 kubelet[3936]: E0114 01:20:18.096888 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:18.098005 kubelet[3936]: E0114 01:20:18.097115 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-689q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8778f546-mp9tk_calico-apiserver(f6926f72-9c01-4e67-abef-2eb546c46570): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:18.098674 kubelet[3936]: E0114 01:20:18.098323 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:20:18.098744 containerd[2417]: time="2026-01-14T01:20:18.098617032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:20:18.202100 kubelet[3936]: E0114 01:20:18.201704 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:20:18.227000 audit: BPF prog-id=239 op=LOAD Jan 14 01:20:18.227000 audit[5581]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd410ef930 a2=94 a3=1 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.227000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.227000 audit: BPF prog-id=239 op=UNLOAD Jan 14 01:20:18.227000 audit[5581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd410ef930 a2=94 a3=1 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.229000 audit[5588]: NETFILTER_CFG table=filter:122 family=2 entries=20 op=nft_register_rule pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:18.229000 audit[5588]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff33393e50 a2=0 a3=7fff33393e3c items=0 ppid=4040 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:18.233000 audit[5588]: NETFILTER_CFG table=nat:123 family=2 entries=14 op=nft_register_rule pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:18.233000 audit[5588]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff33393e50 a2=0 a3=0 items=0 ppid=4040 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:18.237000 audit: BPF prog-id=240 op=LOAD Jan 14 01:20:18.237000 audit[5581]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd410ef920 a2=94 a3=4 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.237000 audit: BPF prog-id=240 op=UNLOAD Jan 14 01:20:18.237000 audit[5581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd410ef920 a2=0 a3=4 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.238000 audit: BPF prog-id=241 op=LOAD Jan 14 01:20:18.238000 audit[5581]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd410ef780 a2=94 a3=5 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.238000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.238000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:20:18.238000 audit[5581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd410ef780 a2=0 a3=5 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.238000 audit: BPF prog-id=242 op=LOAD Jan 14 01:20:18.238000 audit[5581]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd410ef9a0 a2=94 a3=6 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.238000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:20:18.238000 audit[5581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd410ef9a0 a2=0 a3=6 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.238000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.238000 audit: BPF prog-id=243 op=LOAD Jan 14 01:20:18.238000 audit[5581]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd410ef150 a2=94 a3=88 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.238000 audit: BPF prog-id=244 op=LOAD Jan 14 01:20:18.238000 audit[5581]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd410eefd0 a2=94 a3=2 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.238000 audit: BPF prog-id=244 op=UNLOAD Jan 14 01:20:18.238000 audit[5581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd410ef000 a2=0 a3=7ffd410ef100 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.238000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.239000 audit: BPF prog-id=243 op=UNLOAD Jan 14 01:20:18.239000 audit[5581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=f519d10 a2=0 a3=91f9cbf2ec918651 items=0 ppid=5246 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:20:18.245000 audit: BPF prog-id=235 op=UNLOAD Jan 14 01:20:18.245000 audit[5246]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0013fb080 a2=0 a3=0 items=0 ppid=5226 pid=5246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.245000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:20:18.319000 audit[5613]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=5613 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:18.319000 audit[5613]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe5ec7a440 a2=0 a3=7ffe5ec7a42c items=0 ppid=5246 pid=5613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.319000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:18.323000 audit[5615]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=5615 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:18.323000 audit[5615]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd3eed3bd0 a2=0 a3=7ffd3eed3bbc items=0 ppid=5246 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.323000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:18.342000 audit[5612]: NETFILTER_CFG table=raw:126 family=2 entries=21 op=nft_register_chain pid=5612 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:18.342000 audit[5612]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcd5706f60 a2=0 a3=7ffcd5706f4c items=0 ppid=5246 pid=5612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.342000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:18.348000 audit[5618]: NETFILTER_CFG table=filter:127 family=2 entries=170 op=nft_register_chain pid=5618 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:18.348000 audit[5618]: SYSCALL arch=c000003e syscall=46 success=yes exit=97952 a0=3 a1=7ffc80b12950 a2=0 a3=7ffc80b1293c items=0 ppid=5246 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:18.348000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:18.379726 containerd[2417]: time="2026-01-14T01:20:18.379607036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:18.383351 containerd[2417]: time="2026-01-14T01:20:18.383311978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:20:18.383405 containerd[2417]: time="2026-01-14T01:20:18.383388902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:18.383508 kubelet[3936]: E0114 01:20:18.383484 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:20:18.383562 kubelet[3936]: E0114 01:20:18.383519 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:20:18.383773 kubelet[3936]: E0114 01:20:18.383724 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxvs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:20:18.384088 containerd[2417]: time="2026-01-14T01:20:18.383865921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:20:18.663703 containerd[2417]: time="2026-01-14T01:20:18.663587571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:18.666981 containerd[2417]: time="2026-01-14T01:20:18.666951082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:20:18.667085 containerd[2417]: time="2026-01-14T01:20:18.667022377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:18.667183 kubelet[3936]: E0114 01:20:18.667148 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:20:18.667256 kubelet[3936]: E0114 01:20:18.667197 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:20:18.667627 kubelet[3936]: E0114 01:20:18.667414 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:773bdd7ef7d541e28728652281bd8ea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29j5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d4596845d-9dx6l_calico-system(a185b488-e72a-4695-8538-5c10792b0a09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:18.667775 containerd[2417]: time="2026-01-14T01:20:18.667697851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:20:18.933399 containerd[2417]: time="2026-01-14T01:20:18.933265535Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:18.937822 containerd[2417]: time="2026-01-14T01:20:18.937786932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:20:18.937914 containerd[2417]: time="2026-01-14T01:20:18.937867213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:18.938026 kubelet[3936]: E0114 01:20:18.937994 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:20:18.938067 kubelet[3936]: E0114 01:20:18.938039 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:20:18.938400 kubelet[3936]: E0114 01:20:18.938335 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxvs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:18.938537 containerd[2417]: time="2026-01-14T01:20:18.938522745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:20:18.940412 kubelet[3936]: E0114 01:20:18.940360 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:20:19.203414 kubelet[3936]: E0114 01:20:19.203314 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:20:19.204372 containerd[2417]: time="2026-01-14T01:20:19.204240509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:19.205318 kubelet[3936]: E0114 01:20:19.205282 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:20:19.212629 containerd[2417]: time="2026-01-14T01:20:19.212588824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:20:19.212738 containerd[2417]: time="2026-01-14T01:20:19.212594389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:19.212803 kubelet[3936]: E0114 01:20:19.212771 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:20:19.212844 kubelet[3936]: E0114 01:20:19.212815 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:20:19.212948 kubelet[3936]: E0114 01:20:19.212918 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29j5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod whisker-7d4596845d-9dx6l_calico-system(a185b488-e72a-4695-8538-5c10792b0a09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:19.214167 kubelet[3936]: E0114 01:20:19.214135 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d4596845d-9dx6l" podUID="a185b488-e72a-4695-8538-5c10792b0a09" Jan 14 01:20:19.238749 systemd-networkd[2060]: vxlan.calico: Gained IPv6LL Jan 14 01:20:19.302808 systemd-networkd[2060]: cali536072ac1bd: Gained IPv6LL Jan 14 01:20:19.430769 systemd-networkd[2060]: calia444fea4ddd: Gained IPv6LL Jan 14 01:20:19.558783 systemd-networkd[2060]: cali59dffc4b173: Gained IPv6LL Jan 14 01:20:20.203409 containerd[2417]: time="2026-01-14T01:20:20.203342949Z" level=info msg="StopPodSandbox for \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\"" Jan 14 01:20:20.212785 systemd[1]: cri-containerd-97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a.scope: Deactivated successfully. 
Jan 14 01:20:20.214000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:20:20.220057 kernel: kauditd_printk_skb: 275 callbacks suppressed Jan 14 01:20:20.220143 kernel: audit: type=1334 audit(1768353620.214:697): prog-id=210 op=UNLOAD Jan 14 01:20:20.220168 containerd[2417]: time="2026-01-14T01:20:20.217062056Z" level=info msg="received sandbox exit event container_id:\"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\" id:\"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\" exit_status:137 exited_at:{seconds:1768353620 nanos:216663225}" monitor_name=podsandbox Jan 14 01:20:20.214000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:20:20.222442 kernel: audit: type=1334 audit(1768353620.214:698): prog-id=214 op=UNLOAD Jan 14 01:20:20.243117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a-rootfs.mount: Deactivated successfully. Jan 14 01:20:20.244000 audit[5646]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:20.250338 kernel: audit: type=1325 audit(1768353620.244:699): table=filter:128 family=2 entries=20 op=nft_register_rule pid=5646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:20.250467 kernel: audit: type=1300 audit(1768353620.244:699): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff6465e4f0 a2=0 a3=7fff6465e4dc items=0 ppid=4040 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:20.244000 audit[5646]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff6465e4f0 a2=0 a3=7fff6465e4dc items=0 ppid=4040 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:20.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:20.256591 kernel: audit: type=1327 audit(1768353620.244:699): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:20.248000 audit[5646]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:20.259894 kernel: audit: type=1325 audit(1768353620.248:700): table=nat:129 family=2 entries=14 op=nft_register_rule pid=5646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:20.248000 audit[5646]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff6465e4f0 a2=0 a3=0 items=0 ppid=4040 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:20.263044 kernel: audit: type=1300 audit(1768353620.248:700): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff6465e4f0 a2=0 a3=0 items=0 ppid=4040 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:20.248000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:20.265658 kernel: audit: type=1327 audit(1768353620.248:700): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:22.682423 containerd[2417]: time="2026-01-14T01:20:22.682388232Z" level=info msg="shim disconnected" 
id=97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a namespace=k8s.io Jan 14 01:20:22.682423 containerd[2417]: time="2026-01-14T01:20:22.682418894Z" level=info msg="cleaning up after shim disconnected" id=97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a namespace=k8s.io Jan 14 01:20:22.683006 containerd[2417]: time="2026-01-14T01:20:22.682427536Z" level=info msg="cleaning up dead shim" id=97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a namespace=k8s.io Jan 14 01:20:22.692837 containerd[2417]: time="2026-01-14T01:20:22.692803626Z" level=info msg="received sandbox container exit event sandbox_id:\"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\" exit_status:137 exited_at:{seconds:1768353620 nanos:216663225}" monitor_name=criService Jan 14 01:20:22.695466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a-shm.mount: Deactivated successfully. Jan 14 01:20:22.749212 systemd-networkd[2060]: calia444fea4ddd: Link DOWN Jan 14 01:20:22.749222 systemd-networkd[2060]: calia444fea4ddd: Lost carrier Jan 14 01:20:22.803000 audit[5690]: NETFILTER_CFG table=filter:130 family=2 entries=59 op=nft_register_rule pid=5690 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:22.809660 kernel: audit: type=1325 audit(1768353622.803:701): table=filter:130 family=2 entries=59 op=nft_register_rule pid=5690 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:22.803000 audit[5690]: SYSCALL arch=c000003e syscall=46 success=yes exit=4600 a0=3 a1=7ffc8bf300f0 a2=0 a3=7ffc8bf300dc items=0 ppid=5246 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:22.803000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:22.804000 audit[5690]: NETFILTER_CFG table=filter:131 family=2 entries=8 op=nft_unregister_chain pid=5690 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:22.804000 audit[5690]: SYSCALL arch=c000003e syscall=46 success=yes exit=1136 a0=3 a1=7ffc8bf300f0 a2=0 a3=562fb1e11000 items=0 ppid=5246 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:22.816659 kernel: audit: type=1300 audit(1768353622.803:701): arch=c000003e syscall=46 success=yes exit=4600 a0=3 a1=7ffc8bf300f0 a2=0 a3=7ffc8bf300dc items=0 ppid=5246 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:22.804000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:22.848693 containerd[2417]: 2026-01-14 01:20:22.746 [INFO][5670] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:22.848693 containerd[2417]: 2026-01-14 01:20:22.746 [INFO][5670] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" iface="eth0" netns="/var/run/netns/cni-f80a4add-67b4-859c-7e0a-a6b15a111cd5" Jan 14 01:20:22.848693 containerd[2417]: 2026-01-14 01:20:22.748 [INFO][5670] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" iface="eth0" netns="/var/run/netns/cni-f80a4add-67b4-859c-7e0a-a6b15a111cd5" Jan 14 01:20:22.848693 containerd[2417]: 2026-01-14 01:20:22.755 [INFO][5670] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" after=9.164767ms iface="eth0" netns="/var/run/netns/cni-f80a4add-67b4-859c-7e0a-a6b15a111cd5" Jan 14 01:20:22.848693 containerd[2417]: 2026-01-14 01:20:22.755 [INFO][5670] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:22.848693 containerd[2417]: 2026-01-14 01:20:22.755 [INFO][5670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:22.848693 containerd[2417]: 2026-01-14 01:20:22.798 [INFO][5678] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:22.848693 containerd[2417]: 2026-01-14 01:20:22.798 [INFO][5678] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:22.848693 containerd[2417]: 2026-01-14 01:20:22.799 [INFO][5678] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:20:22.850702 containerd[2417]: 2026-01-14 01:20:22.844 [INFO][5678] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:22.850702 containerd[2417]: 2026-01-14 01:20:22.844 [INFO][5678] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:22.850702 containerd[2417]: 2026-01-14 01:20:22.845 [INFO][5678] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:22.850702 containerd[2417]: 2026-01-14 01:20:22.846 [INFO][5670] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:22.851045 containerd[2417]: time="2026-01-14T01:20:22.850823042Z" level=info msg="TearDown network for sandbox \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\" successfully" Jan 14 01:20:22.851045 containerd[2417]: time="2026-01-14T01:20:22.850867520Z" level=info msg="StopPodSandbox for \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\" returns successfully" Jan 14 01:20:22.851403 systemd[1]: run-netns-cni\x2df80a4add\x2d67b4\x2d859c\x2d7e0a\x2da6b15a111cd5.mount: Deactivated successfully. 
Jan 14 01:20:22.915721 kubelet[3936]: I0114 01:20:22.915617 3936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29j5g\" (UniqueName: \"kubernetes.io/projected/a185b488-e72a-4695-8538-5c10792b0a09-kube-api-access-29j5g\") pod \"a185b488-e72a-4695-8538-5c10792b0a09\" (UID: \"a185b488-e72a-4695-8538-5c10792b0a09\") " Jan 14 01:20:22.916491 kubelet[3936]: I0114 01:20:22.915843 3936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a185b488-e72a-4695-8538-5c10792b0a09-whisker-backend-key-pair\") pod \"a185b488-e72a-4695-8538-5c10792b0a09\" (UID: \"a185b488-e72a-4695-8538-5c10792b0a09\") " Jan 14 01:20:22.916491 kubelet[3936]: I0114 01:20:22.915933 3936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a185b488-e72a-4695-8538-5c10792b0a09-whisker-ca-bundle\") pod \"a185b488-e72a-4695-8538-5c10792b0a09\" (UID: \"a185b488-e72a-4695-8538-5c10792b0a09\") " Jan 14 01:20:22.917316 kubelet[3936]: I0114 01:20:22.917243 3936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a185b488-e72a-4695-8538-5c10792b0a09-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a185b488-e72a-4695-8538-5c10792b0a09" (UID: "a185b488-e72a-4695-8538-5c10792b0a09"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:20:22.922796 systemd[1]: var-lib-kubelet-pods-a185b488\x2de72a\x2d4695\x2d8538\x2d5c10792b0a09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d29j5g.mount: Deactivated successfully. 
Jan 14 01:20:22.923027 kubelet[3936]: I0114 01:20:22.923004 3936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a185b488-e72a-4695-8538-5c10792b0a09-kube-api-access-29j5g" (OuterVolumeSpecName: "kube-api-access-29j5g") pod "a185b488-e72a-4695-8538-5c10792b0a09" (UID: "a185b488-e72a-4695-8538-5c10792b0a09"). InnerVolumeSpecName "kube-api-access-29j5g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:20:22.923079 kubelet[3936]: I0114 01:20:22.923001 3936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a185b488-e72a-4695-8538-5c10792b0a09-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a185b488-e72a-4695-8538-5c10792b0a09" (UID: "a185b488-e72a-4695-8538-5c10792b0a09"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:20:22.927004 systemd[1]: var-lib-kubelet-pods-a185b488\x2de72a\x2d4695\x2d8538\x2d5c10792b0a09-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 14 01:20:23.016567 kubelet[3936]: I0114 01:20:23.016540 3936 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-29j5g\" (UniqueName: \"kubernetes.io/projected/a185b488-e72a-4695-8538-5c10792b0a09-kube-api-access-29j5g\") on node \"ci-4578.0.0-p-dbef80f9ad\" DevicePath \"\"" Jan 14 01:20:23.016567 kubelet[3936]: I0114 01:20:23.016567 3936 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a185b488-e72a-4695-8538-5c10792b0a09-whisker-backend-key-pair\") on node \"ci-4578.0.0-p-dbef80f9ad\" DevicePath \"\"" Jan 14 01:20:23.016700 kubelet[3936]: I0114 01:20:23.016581 3936 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a185b488-e72a-4695-8538-5c10792b0a09-whisker-ca-bundle\") on node \"ci-4578.0.0-p-dbef80f9ad\" DevicePath \"\"" Jan 14 01:20:23.058106 systemd[1]: Removed slice kubepods-besteffort-poda185b488_e72a_4695_8538_5c10792b0a09.slice - libcontainer container kubepods-besteffort-poda185b488_e72a_4695_8538_5c10792b0a09.slice. Jan 14 01:20:23.334397 systemd[1]: Created slice kubepods-besteffort-poded6d8542_dfd5_4ecd_928d_cf86db3537f3.slice - libcontainer container kubepods-besteffort-poded6d8542_dfd5_4ecd_928d_cf86db3537f3.slice. 
Jan 14 01:20:23.349000 audit[5698]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5698 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:23.349000 audit[5698]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc40670720 a2=0 a3=7ffc4067070c items=0 ppid=4040 pid=5698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:23.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:23.355000 audit[5698]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5698 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:23.355000 audit[5698]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc40670720 a2=0 a3=0 items=0 ppid=4040 pid=5698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:23.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:23.419266 kubelet[3936]: I0114 01:20:23.419216 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed6d8542-dfd5-4ecd-928d-cf86db3537f3-whisker-backend-key-pair\") pod \"whisker-69445f69fb-5cr2q\" (UID: \"ed6d8542-dfd5-4ecd-928d-cf86db3537f3\") " pod="calico-system/whisker-69445f69fb-5cr2q" Jan 14 01:20:23.419266 kubelet[3936]: I0114 01:20:23.419260 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgkfr\" (UniqueName: 
\"kubernetes.io/projected/ed6d8542-dfd5-4ecd-928d-cf86db3537f3-kube-api-access-dgkfr\") pod \"whisker-69445f69fb-5cr2q\" (UID: \"ed6d8542-dfd5-4ecd-928d-cf86db3537f3\") " pod="calico-system/whisker-69445f69fb-5cr2q" Jan 14 01:20:23.419427 kubelet[3936]: I0114 01:20:23.419284 3936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed6d8542-dfd5-4ecd-928d-cf86db3537f3-whisker-ca-bundle\") pod \"whisker-69445f69fb-5cr2q\" (UID: \"ed6d8542-dfd5-4ecd-928d-cf86db3537f3\") " pod="calico-system/whisker-69445f69fb-5cr2q" Jan 14 01:20:23.641229 containerd[2417]: time="2026-01-14T01:20:23.641137566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69445f69fb-5cr2q,Uid:ed6d8542-dfd5-4ecd-928d-cf86db3537f3,Namespace:calico-system,Attempt:0,}" Jan 14 01:20:23.749907 systemd-networkd[2060]: cali182058a27dd: Link UP Jan 14 01:20:23.750094 systemd-networkd[2060]: cali182058a27dd: Gained carrier Jan 14 01:20:23.826939 containerd[2417]: 2026-01-14 01:20:23.688 [INFO][5707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0 whisker-69445f69fb- calico-system ed6d8542-dfd5-4ecd-928d-cf86db3537f3 981 0 2026-01-14 01:20:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69445f69fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad whisker-69445f69fb-5cr2q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali182058a27dd [] [] }} ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Namespace="calico-system" Pod="whisker-69445f69fb-5cr2q" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-" Jan 14 01:20:23.826939 containerd[2417]: 2026-01-14 
01:20:23.688 [INFO][5707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Namespace="calico-system" Pod="whisker-69445f69fb-5cr2q" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" Jan 14 01:20:23.826939 containerd[2417]: 2026-01-14 01:20:23.710 [INFO][5720] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" HandleID="k8s-pod-network.b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" Jan 14 01:20:23.827467 containerd[2417]: 2026-01-14 01:20:23.710 [INFO][5720] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" HandleID="k8s-pod-network.b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"whisker-69445f69fb-5cr2q", "timestamp":"2026-01-14 01:20:23.710600712 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:23.827467 containerd[2417]: 2026-01-14 01:20:23.710 [INFO][5720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:23.827467 containerd[2417]: 2026-01-14 01:20:23.710 [INFO][5720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:20:23.827467 containerd[2417]: 2026-01-14 01:20:23.710 [INFO][5720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:23.827467 containerd[2417]: 2026-01-14 01:20:23.715 [INFO][5720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:23.827467 containerd[2417]: 2026-01-14 01:20:23.718 [INFO][5720] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:23.827467 containerd[2417]: 2026-01-14 01:20:23.721 [INFO][5720] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:23.827467 containerd[2417]: 2026-01-14 01:20:23.723 [INFO][5720] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:23.827467 containerd[2417]: 2026-01-14 01:20:23.724 [INFO][5720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:23.828807 containerd[2417]: 2026-01-14 01:20:23.724 [INFO][5720] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:23.828807 containerd[2417]: 2026-01-14 01:20:23.725 [INFO][5720] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145 Jan 14 01:20:23.828807 containerd[2417]: 2026-01-14 01:20:23.736 [INFO][5720] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:23.828807 containerd[2417]: 2026-01-14 01:20:23.745 [INFO][5720] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.108.4/26] block=192.168.108.0/26 handle="k8s-pod-network.b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:23.828807 containerd[2417]: 2026-01-14 01:20:23.745 [INFO][5720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.4/26] handle="k8s-pod-network.b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:23.828807 containerd[2417]: 2026-01-14 01:20:23.745 [INFO][5720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:23.828807 containerd[2417]: 2026-01-14 01:20:23.745 [INFO][5720] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.4/26] IPv6=[] ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" HandleID="k8s-pod-network.b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" Jan 14 01:20:23.829174 containerd[2417]: 2026-01-14 01:20:23.746 [INFO][5707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Namespace="calico-system" Pod="whisker-69445f69fb-5cr2q" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0", GenerateName:"whisker-69445f69fb-", Namespace:"calico-system", SelfLink:"", UID:"ed6d8542-dfd5-4ecd-928d-cf86db3537f3", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69445f69fb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"whisker-69445f69fb-5cr2q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali182058a27dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:23.829174 containerd[2417]: 2026-01-14 01:20:23.746 [INFO][5707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.4/32] ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Namespace="calico-system" Pod="whisker-69445f69fb-5cr2q" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" Jan 14 01:20:23.836399 containerd[2417]: 2026-01-14 01:20:23.746 [INFO][5707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali182058a27dd ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Namespace="calico-system" Pod="whisker-69445f69fb-5cr2q" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" Jan 14 01:20:23.836399 containerd[2417]: 2026-01-14 01:20:23.751 [INFO][5707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Namespace="calico-system" Pod="whisker-69445f69fb-5cr2q" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" Jan 14 01:20:23.836471 containerd[2417]: 2026-01-14 01:20:23.752 [INFO][5707] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" Namespace="calico-system" Pod="whisker-69445f69fb-5cr2q" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0", GenerateName:"whisker-69445f69fb-", Namespace:"calico-system", SelfLink:"", UID:"ed6d8542-dfd5-4ecd-928d-cf86db3537f3", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69445f69fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145", Pod:"whisker-69445f69fb-5cr2q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali182058a27dd", MAC:"32:4f:2f:54:15:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:23.836599 containerd[2417]: 2026-01-14 01:20:23.822 [INFO][5707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" 
Namespace="calico-system" Pod="whisker-69445f69fb-5cr2q" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--69445f69fb--5cr2q-eth0" Jan 14 01:20:23.851000 audit[5734]: NETFILTER_CFG table=filter:134 family=2 entries=67 op=nft_register_chain pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:23.851000 audit[5734]: SYSCALL arch=c000003e syscall=46 success=yes exit=38112 a0=3 a1=7fff5be46470 a2=0 a3=7fff5be4645c items=0 ppid=5246 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:23.851000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:23.880810 containerd[2417]: time="2026-01-14T01:20:23.880771901Z" level=info msg="connecting to shim b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145" address="unix:///run/containerd/s/4809962a77c411785d3cb16f82a71bf5f14f7b0b66f5d102d91419c8a24f3474" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:23.908835 systemd[1]: Started cri-containerd-b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145.scope - libcontainer container b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145. 
Jan 14 01:20:23.915000 audit: BPF prog-id=245 op=LOAD Jan 14 01:20:23.916000 audit: BPF prog-id=246 op=LOAD Jan 14 01:20:23.916000 audit[5755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5742 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:23.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343565623538353565303464353035316235306263663662636661 Jan 14 01:20:23.916000 audit: BPF prog-id=246 op=UNLOAD Jan 14 01:20:23.916000 audit[5755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5742 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:23.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343565623538353565303464353035316235306263663662636661 Jan 14 01:20:23.916000 audit: BPF prog-id=247 op=LOAD Jan 14 01:20:23.916000 audit[5755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5742 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:23.916000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343565623538353565303464353035316235306263663662636661 Jan 14 01:20:23.916000 audit: BPF prog-id=248 op=LOAD Jan 14 01:20:23.916000 audit[5755]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5742 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:23.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343565623538353565303464353035316235306263663662636661 Jan 14 01:20:23.916000 audit: BPF prog-id=248 op=UNLOAD Jan 14 01:20:23.916000 audit[5755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5742 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:23.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343565623538353565303464353035316235306263663662636661 Jan 14 01:20:23.916000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:20:23.916000 audit[5755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5742 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:20:23.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343565623538353565303464353035316235306263663662636661 Jan 14 01:20:23.916000 audit: BPF prog-id=249 op=LOAD Jan 14 01:20:23.916000 audit[5755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5742 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:23.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343565623538353565303464353035316235306263663662636661 Jan 14 01:20:23.956628 containerd[2417]: time="2026-01-14T01:20:23.956597326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69445f69fb-5cr2q,Uid:ed6d8542-dfd5-4ecd-928d-cf86db3537f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"b345eb5855e04d5051b50bcf6bcfa583c299830e0456ba4fc9e81b556e475145\"" Jan 14 01:20:23.958559 containerd[2417]: time="2026-01-14T01:20:23.958277404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:20:24.048620 containerd[2417]: time="2026-01-14T01:20:24.048579440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758cb5d69-rt2pc,Uid:b4f27767-b32c-43ae-95eb-f1d5e5f34f59,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:20:24.154527 systemd-networkd[2060]: cali7df8f61defe: Link UP Jan 14 01:20:24.155263 systemd-networkd[2060]: cali7df8f61defe: Gained carrier Jan 14 01:20:24.170686 containerd[2417]: 2026-01-14 01:20:24.095 [INFO][5781] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0 calico-apiserver-7758cb5d69- calico-apiserver b4f27767-b32c-43ae-95eb-f1d5e5f34f59 827 0 2026-01-14 01:19:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7758cb5d69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad calico-apiserver-7758cb5d69-rt2pc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7df8f61defe [] [] }} ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Namespace="calico-apiserver" Pod="calico-apiserver-7758cb5d69-rt2pc" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-" Jan 14 01:20:24.170686 containerd[2417]: 2026-01-14 01:20:24.095 [INFO][5781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Namespace="calico-apiserver" Pod="calico-apiserver-7758cb5d69-rt2pc" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" Jan 14 01:20:24.170686 containerd[2417]: 2026-01-14 01:20:24.114 [INFO][5792] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" HandleID="k8s-pod-network.61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" Jan 14 01:20:24.171296 containerd[2417]: 2026-01-14 01:20:24.114 [INFO][5792] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" HandleID="k8s-pod-network.61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" 
Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"calico-apiserver-7758cb5d69-rt2pc", "timestamp":"2026-01-14 01:20:24.114645899 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:24.171296 containerd[2417]: 2026-01-14 01:20:24.114 [INFO][5792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:24.171296 containerd[2417]: 2026-01-14 01:20:24.114 [INFO][5792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:20:24.171296 containerd[2417]: 2026-01-14 01:20:24.114 [INFO][5792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:24.171296 containerd[2417]: 2026-01-14 01:20:24.118 [INFO][5792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:24.171296 containerd[2417]: 2026-01-14 01:20:24.121 [INFO][5792] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:24.171296 containerd[2417]: 2026-01-14 01:20:24.124 [INFO][5792] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:24.171296 containerd[2417]: 2026-01-14 01:20:24.126 [INFO][5792] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:24.171296 containerd[2417]: 2026-01-14 01:20:24.127 [INFO][5792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 
14 01:20:24.171549 containerd[2417]: 2026-01-14 01:20:24.127 [INFO][5792] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:24.171549 containerd[2417]: 2026-01-14 01:20:24.128 [INFO][5792] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107 Jan 14 01:20:24.171549 containerd[2417]: 2026-01-14 01:20:24.133 [INFO][5792] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:24.171549 containerd[2417]: 2026-01-14 01:20:24.143 [INFO][5792] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.5/26] block=192.168.108.0/26 handle="k8s-pod-network.61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:24.171549 containerd[2417]: 2026-01-14 01:20:24.143 [INFO][5792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.5/26] handle="k8s-pod-network.61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:24.171549 containerd[2417]: 2026-01-14 01:20:24.143 [INFO][5792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:20:24.171549 containerd[2417]: 2026-01-14 01:20:24.143 [INFO][5792] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.5/26] IPv6=[] ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" HandleID="k8s-pod-network.61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" Jan 14 01:20:24.173121 containerd[2417]: 2026-01-14 01:20:24.146 [INFO][5781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Namespace="calico-apiserver" Pod="calico-apiserver-7758cb5d69-rt2pc" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0", GenerateName:"calico-apiserver-7758cb5d69-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4f27767-b32c-43ae-95eb-f1d5e5f34f59", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7758cb5d69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"calico-apiserver-7758cb5d69-rt2pc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.108.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7df8f61defe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:24.173207 containerd[2417]: 2026-01-14 01:20:24.146 [INFO][5781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.5/32] ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Namespace="calico-apiserver" Pod="calico-apiserver-7758cb5d69-rt2pc" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" Jan 14 01:20:24.173207 containerd[2417]: 2026-01-14 01:20:24.146 [INFO][5781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7df8f61defe ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Namespace="calico-apiserver" Pod="calico-apiserver-7758cb5d69-rt2pc" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" Jan 14 01:20:24.173207 containerd[2417]: 2026-01-14 01:20:24.155 [INFO][5781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Namespace="calico-apiserver" Pod="calico-apiserver-7758cb5d69-rt2pc" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" Jan 14 01:20:24.173284 containerd[2417]: 2026-01-14 01:20:24.155 [INFO][5781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Namespace="calico-apiserver" Pod="calico-apiserver-7758cb5d69-rt2pc" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0", GenerateName:"calico-apiserver-7758cb5d69-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4f27767-b32c-43ae-95eb-f1d5e5f34f59", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7758cb5d69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107", Pod:"calico-apiserver-7758cb5d69-rt2pc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7df8f61defe", MAC:"ca:3a:9d:a0:2e:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:24.173349 containerd[2417]: 2026-01-14 01:20:24.167 [INFO][5781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" Namespace="calico-apiserver" Pod="calico-apiserver-7758cb5d69-rt2pc" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--7758cb5d69--rt2pc-eth0" Jan 14 01:20:24.179000 audit[5805]: NETFILTER_CFG table=filter:135 family=2 entries=41 
op=nft_register_chain pid=5805 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:24.179000 audit[5805]: SYSCALL arch=c000003e syscall=46 success=yes exit=23060 a0=3 a1=7ffc406501e0 a2=0 a3=7ffc406501cc items=0 ppid=5246 pid=5805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:24.179000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:24.219062 containerd[2417]: time="2026-01-14T01:20:24.219007689Z" level=info msg="connecting to shim 61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107" address="unix:///run/containerd/s/8a33dd5d7634c370a836c07db9f60b31ab0916881f693d673a8234e1001f9604" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:24.236806 systemd[1]: Started cri-containerd-61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107.scope - libcontainer container 61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107. 
Jan 14 01:20:24.240627 containerd[2417]: time="2026-01-14T01:20:24.240522802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:24.243786 containerd[2417]: time="2026-01-14T01:20:24.243672734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:20:24.244004 containerd[2417]: time="2026-01-14T01:20:24.243735761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:24.244249 kubelet[3936]: E0114 01:20:24.244200 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:20:24.244766 kubelet[3936]: E0114 01:20:24.244520 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:20:24.244766 kubelet[3936]: E0114 01:20:24.244710 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:773bdd7ef7d541e28728652281bd8ea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgkfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69445f69fb-5cr2q_calico-system(ed6d8542-dfd5-4ecd-928d-cf86db3537f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:24.245000 audit: BPF prog-id=250 op=LOAD Jan 14 01:20:24.245000 audit: BPF prog-id=251 op=LOAD Jan 14 01:20:24.245000 audit[5826]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 
a3=0 items=0 ppid=5815 pid=5826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:24.248028 containerd[2417]: time="2026-01-14T01:20:24.247591181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:20:24.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636331613863376638356538306461643935336666373465323362 Jan 14 01:20:24.246000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:20:24.246000 audit[5826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5815 pid=5826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:24.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636331613863376638356538306461643935336666373465323362 Jan 14 01:20:24.247000 audit: BPF prog-id=252 op=LOAD Jan 14 01:20:24.247000 audit[5826]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5815 pid=5826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:24.247000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636331613863376638356538306461643935336666373465323362 Jan 14 01:20:24.248000 audit: BPF prog-id=253 op=LOAD Jan 14 01:20:24.248000 audit[5826]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5815 pid=5826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:24.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636331613863376638356538306461643935336666373465323362 Jan 14 01:20:24.248000 audit: BPF prog-id=253 op=UNLOAD Jan 14 01:20:24.248000 audit[5826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5815 pid=5826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:24.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636331613863376638356538306461643935336666373465323362 Jan 14 01:20:24.248000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:20:24.248000 audit[5826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5815 pid=5826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:20:24.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636331613863376638356538306461643935336666373465323362 Jan 14 01:20:24.249000 audit: BPF prog-id=254 op=LOAD Jan 14 01:20:24.249000 audit[5826]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5815 pid=5826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:24.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636331613863376638356538306461643935336666373465323362 Jan 14 01:20:24.289941 containerd[2417]: time="2026-01-14T01:20:24.289867016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758cb5d69-rt2pc,Uid:b4f27767-b32c-43ae-95eb-f1d5e5f34f59,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"61cc1a8c7f85e80dad953ff74e23b5c5c4c04d80da83cb7e83585a0b3b960107\"" Jan 14 01:20:24.524158 containerd[2417]: time="2026-01-14T01:20:24.524106651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:24.527391 containerd[2417]: time="2026-01-14T01:20:24.527345779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:20:24.527508 containerd[2417]: time="2026-01-14T01:20:24.527422664Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:24.527609 kubelet[3936]: E0114 01:20:24.527558 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:20:24.527687 kubelet[3936]: E0114 01:20:24.527616 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:20:24.528391 kubelet[3936]: E0114 01:20:24.527816 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgkfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69445f69fb-5cr2q_calico-system(ed6d8542-dfd5-4ecd-928d-cf86db3537f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:24.528506 containerd[2417]: time="2026-01-14T01:20:24.527921923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:20:24.529931 kubelet[3936]: E0114 01:20:24.529863 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" 
podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:20:24.799551 containerd[2417]: time="2026-01-14T01:20:24.799467232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:24.802543 containerd[2417]: time="2026-01-14T01:20:24.802507031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:20:24.802628 containerd[2417]: time="2026-01-14T01:20:24.802578112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:24.802764 kubelet[3936]: E0114 01:20:24.802727 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:24.802830 kubelet[3936]: E0114 01:20:24.802766 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:24.802965 kubelet[3936]: E0114 01:20:24.802917 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7758cb5d69-rt2pc_calico-apiserver(b4f27767-b32c-43ae-95eb-f1d5e5f34f59): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:24.804353 kubelet[3936]: E0114 01:20:24.804312 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:20:25.049833 containerd[2417]: time="2026-01-14T01:20:25.049585117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ppg7,Uid:b5b86644-851d-482c-92f0-3ce7f3e405fd,Namespace:kube-system,Attempt:0,}" Jan 14 01:20:25.051800 kubelet[3936]: I0114 01:20:25.051772 3936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a185b488-e72a-4695-8538-5c10792b0a09" path="/var/lib/kubelet/pods/a185b488-e72a-4695-8538-5c10792b0a09/volumes" Jan 14 01:20:25.150111 systemd-networkd[2060]: cali48047ad1d2d: Link UP Jan 14 01:20:25.151098 systemd-networkd[2060]: cali48047ad1d2d: Gained carrier Jan 14 01:20:25.165534 containerd[2417]: 2026-01-14 01:20:25.093 [INFO][5852] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0 coredns-668d6bf9bc- kube-system b5b86644-851d-482c-92f0-3ce7f3e405fd 829 0 2026-01-14 01:19:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad coredns-668d6bf9bc-2ppg7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48047ad1d2d [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ppg7" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-" Jan 14 01:20:25.165534 containerd[2417]: 2026-01-14 01:20:25.093 [INFO][5852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ppg7" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" Jan 14 01:20:25.165534 containerd[2417]: 2026-01-14 01:20:25.114 [INFO][5864] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" HandleID="k8s-pod-network.ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" Jan 14 01:20:25.165773 containerd[2417]: 2026-01-14 01:20:25.114 [INFO][5864] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" HandleID="k8s-pod-network.ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f830), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"coredns-668d6bf9bc-2ppg7", "timestamp":"2026-01-14 01:20:25.114275809 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:25.165773 containerd[2417]: 2026-01-14 01:20:25.114 [INFO][5864] ipam/ipam_plugin.go 377: About to acquire host-wide 
IPAM lock. Jan 14 01:20:25.165773 containerd[2417]: 2026-01-14 01:20:25.114 [INFO][5864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:20:25.165773 containerd[2417]: 2026-01-14 01:20:25.114 [INFO][5864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:25.165773 containerd[2417]: 2026-01-14 01:20:25.119 [INFO][5864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:25.165773 containerd[2417]: 2026-01-14 01:20:25.122 [INFO][5864] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:25.165773 containerd[2417]: 2026-01-14 01:20:25.125 [INFO][5864] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:25.165773 containerd[2417]: 2026-01-14 01:20:25.126 [INFO][5864] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:25.165773 containerd[2417]: 2026-01-14 01:20:25.128 [INFO][5864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:25.166085 containerd[2417]: 2026-01-14 01:20:25.128 [INFO][5864] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:25.166085 containerd[2417]: 2026-01-14 01:20:25.129 [INFO][5864] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b Jan 14 01:20:25.166085 containerd[2417]: 2026-01-14 01:20:25.138 [INFO][5864] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 
handle="k8s-pod-network.ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:25.166085 containerd[2417]: 2026-01-14 01:20:25.145 [INFO][5864] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.6/26] block=192.168.108.0/26 handle="k8s-pod-network.ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:25.166085 containerd[2417]: 2026-01-14 01:20:25.146 [INFO][5864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.6/26] handle="k8s-pod-network.ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:25.166085 containerd[2417]: 2026-01-14 01:20:25.146 [INFO][5864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:25.166085 containerd[2417]: 2026-01-14 01:20:25.146 [INFO][5864] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.6/26] IPv6=[] ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" HandleID="k8s-pod-network.ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" Jan 14 01:20:25.166253 containerd[2417]: 2026-01-14 01:20:25.147 [INFO][5852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ppg7" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b5b86644-851d-482c-92f0-3ce7f3e405fd", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 31, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"coredns-668d6bf9bc-2ppg7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48047ad1d2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:25.166253 containerd[2417]: 2026-01-14 01:20:25.147 [INFO][5852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.6/32] ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ppg7" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" Jan 14 01:20:25.166253 containerd[2417]: 2026-01-14 01:20:25.147 [INFO][5852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48047ad1d2d ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-2ppg7" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" Jan 14 01:20:25.166253 containerd[2417]: 2026-01-14 01:20:25.151 [INFO][5852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ppg7" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" Jan 14 01:20:25.166253 containerd[2417]: 2026-01-14 01:20:25.151 [INFO][5852] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ppg7" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b5b86644-851d-482c-92f0-3ce7f3e405fd", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b", Pod:"coredns-668d6bf9bc-2ppg7", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.108.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48047ad1d2d", MAC:"b6:2d:7c:6f:a2:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:25.166253 containerd[2417]: 2026-01-14 01:20:25.162 [INFO][5852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ppg7" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--2ppg7-eth0" Jan 14 01:20:25.176000 audit[5880]: NETFILTER_CFG table=filter:136 family=2 entries=50 op=nft_register_chain pid=5880 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:25.176000 audit[5880]: SYSCALL arch=c000003e syscall=46 success=yes exit=24912 a0=3 a1=7fff3cf2e190 a2=0 a3=7fff3cf2e17c items=0 ppid=5246 pid=5880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.176000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:25.222952 kubelet[3936]: E0114 01:20:25.222915 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:20:25.226334 kubelet[3936]: E0114 01:20:25.226294 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:20:25.233104 containerd[2417]: time="2026-01-14T01:20:25.233044091Z" level=info msg="connecting to shim ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b" address="unix:///run/containerd/s/86c4858077fb323fd8fe6383aacf83bd71d0ca6b511daaae8248430c6b39603f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:25.255820 systemd[1]: Started cri-containerd-ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b.scope - libcontainer container ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b. 
Jan 14 01:20:25.275000 audit: BPF prog-id=255 op=LOAD Jan 14 01:20:25.277205 kernel: kauditd_printk_skb: 63 callbacks suppressed Jan 14 01:20:25.277256 kernel: audit: type=1334 audit(1768353625.275:724): prog-id=255 op=LOAD Jan 14 01:20:25.279000 audit: BPF prog-id=256 op=LOAD Jan 14 01:20:25.279000 audit[5902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5889 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.286976 kernel: audit: type=1334 audit(1768353625.279:725): prog-id=256 op=LOAD Jan 14 01:20:25.287037 kernel: audit: type=1300 audit(1768353625.279:725): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5889 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 Jan 14 01:20:25.279000 audit: BPF prog-id=256 op=UNLOAD Jan 14 01:20:25.296390 kernel: audit: type=1327 audit(1768353625.279:725): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 Jan 14 01:20:25.296444 kernel: audit: type=1334 audit(1768353625.279:726): prog-id=256 op=UNLOAD Jan 14 01:20:25.279000 audit[5902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5889 pid=5902 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.300289 kernel: audit: type=1300 audit(1768353625.279:726): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5889 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.307494 kernel: audit: type=1327 audit(1768353625.279:726): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 Jan 14 01:20:25.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 Jan 14 01:20:25.308887 kernel: audit: type=1334 audit(1768353625.279:727): prog-id=257 op=LOAD Jan 14 01:20:25.279000 audit: BPF prog-id=257 op=LOAD Jan 14 01:20:25.314161 kernel: audit: type=1300 audit(1768353625.279:727): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5889 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.279000 audit[5902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5889 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:20:25.319384 kernel: audit: type=1327 audit(1768353625.279:727): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 Jan 14 01:20:25.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 Jan 14 01:20:25.279000 audit: BPF prog-id=258 op=LOAD Jan 14 01:20:25.279000 audit[5902]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5889 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 Jan 14 01:20:25.279000 audit: BPF prog-id=258 op=UNLOAD Jan 14 01:20:25.279000 audit[5902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5889 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 
Jan 14 01:20:25.279000 audit: BPF prog-id=257 op=UNLOAD Jan 14 01:20:25.279000 audit[5902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5889 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 Jan 14 01:20:25.279000 audit: BPF prog-id=259 op=LOAD Jan 14 01:20:25.279000 audit[5902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5889 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643534663535386537663136666331303566303764373836616139 Jan 14 01:20:25.326000 audit[5922]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=5922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:25.326000 audit[5922]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe5509c6c0 a2=0 a3=7ffe5509c6ac items=0 ppid=4040 pid=5922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.326000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:25.330000 audit[5922]: NETFILTER_CFG table=nat:138 family=2 entries=14 op=nft_register_rule pid=5922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:25.330000 audit[5922]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe5509c6c0 a2=0 a3=0 items=0 ppid=4040 pid=5922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:25.341000 audit[5932]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:25.341000 audit[5932]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc64f69110 a2=0 a3=7ffc64f690fc items=0 ppid=4040 pid=5932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.341000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:25.343000 audit[5932]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=5932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:25.343000 audit[5932]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc64f69110 a2=0 a3=0 items=0 ppid=4040 pid=5932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:20:25.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:25.347118 containerd[2417]: time="2026-01-14T01:20:25.347048440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ppg7,Uid:b5b86644-851d-482c-92f0-3ce7f3e405fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b\"" Jan 14 01:20:25.349160 containerd[2417]: time="2026-01-14T01:20:25.349133639Z" level=info msg="CreateContainer within sandbox \"ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:20:25.392957 containerd[2417]: time="2026-01-14T01:20:25.392866336Z" level=info msg="Container e019edb332dcdeee01cb2b9c55b5be3504d785d274f7516395bc4525b4d8f7c9: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:20:25.408751 containerd[2417]: time="2026-01-14T01:20:25.408705953Z" level=info msg="CreateContainer within sandbox \"ded54f558e7f16fc105f07d786aa966241543e1496511fea395aee4c4214ac1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e019edb332dcdeee01cb2b9c55b5be3504d785d274f7516395bc4525b4d8f7c9\"" Jan 14 01:20:25.409842 containerd[2417]: time="2026-01-14T01:20:25.409136979Z" level=info msg="StartContainer for \"e019edb332dcdeee01cb2b9c55b5be3504d785d274f7516395bc4525b4d8f7c9\"" Jan 14 01:20:25.410153 containerd[2417]: time="2026-01-14T01:20:25.410127567Z" level=info msg="connecting to shim e019edb332dcdeee01cb2b9c55b5be3504d785d274f7516395bc4525b4d8f7c9" address="unix:///run/containerd/s/86c4858077fb323fd8fe6383aacf83bd71d0ca6b511daaae8248430c6b39603f" protocol=ttrpc version=3 Jan 14 01:20:25.427811 systemd[1]: Started cri-containerd-e019edb332dcdeee01cb2b9c55b5be3504d785d274f7516395bc4525b4d8f7c9.scope - libcontainer container 
e019edb332dcdeee01cb2b9c55b5be3504d785d274f7516395bc4525b4d8f7c9. Jan 14 01:20:25.436000 audit: BPF prog-id=260 op=LOAD Jan 14 01:20:25.436000 audit: BPF prog-id=261 op=LOAD Jan 14 01:20:25.436000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5889 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.436000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530313965646233333264636465656530316362326239633535623562 Jan 14 01:20:25.436000 audit: BPF prog-id=261 op=UNLOAD Jan 14 01:20:25.436000 audit[5934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5889 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.436000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530313965646233333264636465656530316362326239633535623562 Jan 14 01:20:25.436000 audit: BPF prog-id=262 op=LOAD Jan 14 01:20:25.436000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5889 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.436000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530313965646233333264636465656530316362326239633535623562 Jan 14 01:20:25.436000 audit: BPF prog-id=263 op=LOAD Jan 14 01:20:25.436000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5889 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.436000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530313965646233333264636465656530316362326239633535623562 Jan 14 01:20:25.436000 audit: BPF prog-id=263 op=UNLOAD Jan 14 01:20:25.436000 audit[5934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5889 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.436000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530313965646233333264636465656530316362326239633535623562 Jan 14 01:20:25.436000 audit: BPF prog-id=262 op=UNLOAD Jan 14 01:20:25.436000 audit[5934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5889 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:20:25.436000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530313965646233333264636465656530316362326239633535623562 Jan 14 01:20:25.436000 audit: BPF prog-id=264 op=LOAD Jan 14 01:20:25.436000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5889 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:25.436000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530313965646233333264636465656530316362326239633535623562 Jan 14 01:20:25.456806 containerd[2417]: time="2026-01-14T01:20:25.456783603Z" level=info msg="StartContainer for \"e019edb332dcdeee01cb2b9c55b5be3504d785d274f7516395bc4525b4d8f7c9\" returns successfully" Jan 14 01:20:25.510744 systemd-networkd[2060]: cali182058a27dd: Gained IPv6LL Jan 14 01:20:25.697262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262948741.mount: Deactivated successfully. 
Jan 14 01:20:26.049008 containerd[2417]: time="2026-01-14T01:20:26.048971069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f77f5cb44-nf9jt,Uid:1876e14b-df10-499c-9b9b-1ece31d0136a,Namespace:calico-system,Attempt:0,}" Jan 14 01:20:26.088352 systemd-networkd[2060]: cali7df8f61defe: Gained IPv6LL Jan 14 01:20:26.140272 systemd-networkd[2060]: cali683fb4e77f2: Link UP Jan 14 01:20:26.140539 systemd-networkd[2060]: cali683fb4e77f2: Gained carrier Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.089 [INFO][5969] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0 calico-kube-controllers-f77f5cb44- calico-system 1876e14b-df10-499c-9b9b-1ece31d0136a 816 0 2026-01-14 01:19:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f77f5cb44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad calico-kube-controllers-f77f5cb44-nf9jt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali683fb4e77f2 [] [] }} ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Namespace="calico-system" Pod="calico-kube-controllers-f77f5cb44-nf9jt" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.090 [INFO][5969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Namespace="calico-system" Pod="calico-kube-controllers-f77f5cb44-nf9jt" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" Jan 14 01:20:26.156318 containerd[2417]: 
2026-01-14 01:20:26.110 [INFO][5980] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" HandleID="k8s-pod-network.88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.110 [INFO][5980] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" HandleID="k8s-pod-network.88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5770), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"calico-kube-controllers-f77f5cb44-nf9jt", "timestamp":"2026-01-14 01:20:26.110078407 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.110 [INFO][5980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.110 [INFO][5980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.110 [INFO][5980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.115 [INFO][5980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.118 [INFO][5980] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.121 [INFO][5980] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.122 [INFO][5980] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.124 [INFO][5980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.124 [INFO][5980] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.125 [INFO][5980] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5 Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.128 [INFO][5980] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.135 [INFO][5980] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.108.7/26] block=192.168.108.0/26 handle="k8s-pod-network.88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.135 [INFO][5980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.7/26] handle="k8s-pod-network.88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.136 [INFO][5980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:26.156318 containerd[2417]: 2026-01-14 01:20:26.136 [INFO][5980] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.7/26] IPv6=[] ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" HandleID="k8s-pod-network.88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" Jan 14 01:20:26.159241 containerd[2417]: 2026-01-14 01:20:26.138 [INFO][5969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Namespace="calico-system" Pod="calico-kube-controllers-f77f5cb44-nf9jt" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0", GenerateName:"calico-kube-controllers-f77f5cb44-", Namespace:"calico-system", SelfLink:"", UID:"1876e14b-df10-499c-9b9b-1ece31d0136a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f77f5cb44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"calico-kube-controllers-f77f5cb44-nf9jt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali683fb4e77f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:26.159241 containerd[2417]: 2026-01-14 01:20:26.138 [INFO][5969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.7/32] ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Namespace="calico-system" Pod="calico-kube-controllers-f77f5cb44-nf9jt" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" Jan 14 01:20:26.159241 containerd[2417]: 2026-01-14 01:20:26.138 [INFO][5969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali683fb4e77f2 ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Namespace="calico-system" Pod="calico-kube-controllers-f77f5cb44-nf9jt" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" Jan 14 01:20:26.159241 containerd[2417]: 2026-01-14 01:20:26.140 [INFO][5969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Namespace="calico-system" Pod="calico-kube-controllers-f77f5cb44-nf9jt" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" Jan 14 01:20:26.159241 containerd[2417]: 2026-01-14 01:20:26.141 [INFO][5969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Namespace="calico-system" Pod="calico-kube-controllers-f77f5cb44-nf9jt" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0", GenerateName:"calico-kube-controllers-f77f5cb44-", Namespace:"calico-system", SelfLink:"", UID:"1876e14b-df10-499c-9b9b-1ece31d0136a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f77f5cb44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5", Pod:"calico-kube-controllers-f77f5cb44-nf9jt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.7/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali683fb4e77f2", MAC:"8e:3f:97:e2:19:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:26.159241 containerd[2417]: 2026-01-14 01:20:26.152 [INFO][5969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" Namespace="calico-system" Pod="calico-kube-controllers-f77f5cb44-nf9jt" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--kube--controllers--f77f5cb44--nf9jt-eth0" Jan 14 01:20:26.167000 audit[5994]: NETFILTER_CFG table=filter:141 family=2 entries=48 op=nft_register_chain pid=5994 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:26.167000 audit[5994]: SYSCALL arch=c000003e syscall=46 success=yes exit=23124 a0=3 a1=7ffe976ebbb0 a2=0 a3=7ffe976ebb9c items=0 ppid=5246 pid=5994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.167000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:26.203231 containerd[2417]: time="2026-01-14T01:20:26.203174540Z" level=info msg="connecting to shim 88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5" address="unix:///run/containerd/s/e0874f40cab1661a9979c7ad8546a8134ee89c7ca8ea66a12e4056f66395dd01" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:26.223304 kubelet[3936]: E0114 01:20:26.223093 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:20:26.226989 systemd[1]: Started cri-containerd-88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5.scope - libcontainer container 88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5. Jan 14 01:20:26.242000 audit: BPF prog-id=265 op=LOAD Jan 14 01:20:26.242000 audit: BPF prog-id=266 op=LOAD Jan 14 01:20:26.242000 audit[6015]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6003 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838303137353134663130336337306261646566623766646438316666 Jan 14 01:20:26.243000 audit: BPF prog-id=266 op=UNLOAD Jan 14 01:20:26.243000 audit[6015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6003 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838303137353134663130336337306261646566623766646438316666 Jan 
14 01:20:26.243000 audit: BPF prog-id=267 op=LOAD Jan 14 01:20:26.243000 audit[6015]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=6003 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838303137353134663130336337306261646566623766646438316666 Jan 14 01:20:26.243000 audit: BPF prog-id=268 op=LOAD Jan 14 01:20:26.243000 audit[6015]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=6003 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838303137353134663130336337306261646566623766646438316666 Jan 14 01:20:26.243000 audit: BPF prog-id=268 op=UNLOAD Jan 14 01:20:26.243000 audit[6015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6003 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.243000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838303137353134663130336337306261646566623766646438316666 Jan 14 01:20:26.243000 audit: BPF prog-id=267 op=UNLOAD Jan 14 01:20:26.243000 audit[6015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6003 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838303137353134663130336337306261646566623766646438316666 Jan 14 01:20:26.243000 audit: BPF prog-id=269 op=LOAD Jan 14 01:20:26.243000 audit[6015]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=6003 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838303137353134663130336337306261646566623766646438316666 Jan 14 01:20:26.249360 kubelet[3936]: I0114 01:20:26.249311 3936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2ppg7" podStartSLOduration=55.249292587 podStartE2EDuration="55.249292587s" podCreationTimestamp="2026-01-14 01:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:20:26.248757503 +0000 UTC m=+59.279688700" watchObservedRunningTime="2026-01-14 01:20:26.249292587 +0000 UTC m=+59.280223786" Jan 14 01:20:26.289000 audit[6043]: NETFILTER_CFG table=filter:142 family=2 entries=17 op=nft_register_rule pid=6043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:26.289000 audit[6043]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff060724d0 a2=0 a3=7fff060724bc items=0 ppid=4040 pid=6043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.289000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:26.293651 containerd[2417]: time="2026-01-14T01:20:26.293549242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f77f5cb44-nf9jt,Uid:1876e14b-df10-499c-9b9b-1ece31d0136a,Namespace:calico-system,Attempt:0,} returns sandbox id \"88017514f103c70badefb7fdd81ffe06c0abe8c3a4ef61e9728c8e19c2b109b5\"" Jan 14 01:20:26.296696 containerd[2417]: time="2026-01-14T01:20:26.296627141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:20:26.296000 audit[6043]: NETFILTER_CFG table=nat:143 family=2 entries=35 op=nft_register_chain pid=6043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:26.296000 audit[6043]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff060724d0 a2=0 a3=7fff060724bc items=0 ppid=4040 pid=6043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:26.296000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:26.566397 containerd[2417]: time="2026-01-14T01:20:26.566342529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:26.573190 containerd[2417]: time="2026-01-14T01:20:26.573130120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:20:26.573190 containerd[2417]: time="2026-01-14T01:20:26.573172244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:26.573354 kubelet[3936]: E0114 01:20:26.573315 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:20:26.573407 kubelet[3936]: E0114 01:20:26.573367 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:20:26.573553 kubelet[3936]: E0114 01:20:26.573497 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnsz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f77f5cb44-nf9jt_calico-system(1876e14b-df10-499c-9b9b-1ece31d0136a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:26.574955 kubelet[3936]: E0114 01:20:26.574880 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:20:27.051239 containerd[2417]: time="2026-01-14T01:20:27.051191979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ld4sb,Uid:4837b182-3864-4240-9f59-b7d855d0bb02,Namespace:kube-system,Attempt:0,}" Jan 14 01:20:27.059511 containerd[2417]: 
time="2026-01-14T01:20:27.059479628Z" level=info msg="StopPodSandbox for \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\"" Jan 14 01:20:27.111084 systemd-networkd[2060]: cali48047ad1d2d: Gained IPv6LL Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.100 [WARNING][6055] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.100 [INFO][6055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.100 [INFO][6055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" iface="eth0" netns="" Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.100 [INFO][6055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.100 [INFO][6055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.134 [INFO][6076] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.134 [INFO][6076] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.134 [INFO][6076] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.141 [WARNING][6076] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.141 [INFO][6076] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.142 [INFO][6076] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:27.145718 containerd[2417]: 2026-01-14 01:20:27.144 [INFO][6055] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:27.146332 containerd[2417]: time="2026-01-14T01:20:27.145831895Z" level=info msg="TearDown network for sandbox \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\" successfully" Jan 14 01:20:27.146332 containerd[2417]: time="2026-01-14T01:20:27.146025924Z" level=info msg="StopPodSandbox for \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\" returns successfully" Jan 14 01:20:27.146826 containerd[2417]: time="2026-01-14T01:20:27.146715656Z" level=info msg="RemovePodSandbox for \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\"" Jan 14 01:20:27.146826 containerd[2417]: time="2026-01-14T01:20:27.146760430Z" level=info msg="Forcibly stopping sandbox \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\"" Jan 14 01:20:27.185088 systemd-networkd[2060]: cali221c9d74e77: Link UP Jan 14 01:20:27.186450 systemd-networkd[2060]: cali221c9d74e77: Gained carrier Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.103 [INFO][6057] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0 coredns-668d6bf9bc- kube-system 4837b182-3864-4240-9f59-b7d855d0bb02 828 0 2026-01-14 01:19:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad coredns-668d6bf9bc-ld4sb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali221c9d74e77 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Namespace="kube-system" Pod="coredns-668d6bf9bc-ld4sb" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-" Jan 14 01:20:27.206986 
containerd[2417]: 2026-01-14 01:20:27.103 [INFO][6057] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Namespace="kube-system" Pod="coredns-668d6bf9bc-ld4sb" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.139 [INFO][6081] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" HandleID="k8s-pod-network.dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.139 [INFO][6081] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" HandleID="k8s-pod-network.dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae020), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"coredns-668d6bf9bc-ld4sb", "timestamp":"2026-01-14 01:20:27.13968982 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.139 [INFO][6081] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.142 [INFO][6081] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.142 [INFO][6081] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.148 [INFO][6081] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.153 [INFO][6081] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.157 [INFO][6081] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.160 [INFO][6081] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.162 [INFO][6081] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.162 [INFO][6081] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.163 [INFO][6081] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250 Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.168 [INFO][6081] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.178 [INFO][6081] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.108.8/26] block=192.168.108.0/26 handle="k8s-pod-network.dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.178 [INFO][6081] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.8/26] handle="k8s-pod-network.dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.178 [INFO][6081] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:27.206986 containerd[2417]: 2026-01-14 01:20:27.178 [INFO][6081] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.8/26] IPv6=[] ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" HandleID="k8s-pod-network.dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" Jan 14 01:20:27.209341 containerd[2417]: 2026-01-14 01:20:27.181 [INFO][6057] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Namespace="kube-system" Pod="coredns-668d6bf9bc-ld4sb" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4837b182-3864-4240-9f59-b7d855d0bb02", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"coredns-668d6bf9bc-ld4sb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali221c9d74e77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:27.209341 containerd[2417]: 2026-01-14 01:20:27.181 [INFO][6057] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.8/32] ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Namespace="kube-system" Pod="coredns-668d6bf9bc-ld4sb" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" Jan 14 01:20:27.209341 containerd[2417]: 2026-01-14 01:20:27.181 [INFO][6057] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali221c9d74e77 ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Namespace="kube-system" Pod="coredns-668d6bf9bc-ld4sb" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" Jan 14 01:20:27.209341 containerd[2417]: 2026-01-14 01:20:27.187 [INFO][6057] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Namespace="kube-system" Pod="coredns-668d6bf9bc-ld4sb" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" Jan 14 01:20:27.209341 containerd[2417]: 2026-01-14 01:20:27.188 [INFO][6057] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Namespace="kube-system" Pod="coredns-668d6bf9bc-ld4sb" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4837b182-3864-4240-9f59-b7d855d0bb02", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250", Pod:"coredns-668d6bf9bc-ld4sb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali221c9d74e77", MAC:"6a:9d:b8:1b:2e:da", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:27.209341 containerd[2417]: 2026-01-14 01:20:27.203 [INFO][6057] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" Namespace="kube-system" Pod="coredns-668d6bf9bc-ld4sb" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-coredns--668d6bf9bc--ld4sb-eth0" Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.179 [WARNING][6098] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.179 [INFO][6098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.179 [INFO][6098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" iface="eth0" netns="" Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.179 [INFO][6098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.179 [INFO][6098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.212 [INFO][6105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.212 [INFO][6105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.212 [INFO][6105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.217 [WARNING][6105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.217 [INFO][6105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" HandleID="k8s-pod-network.97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-whisker--7d4596845d--9dx6l-eth0" Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.219 [INFO][6105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:27.222570 containerd[2417]: 2026-01-14 01:20:27.220 [INFO][6098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a" Jan 14 01:20:27.223662 containerd[2417]: time="2026-01-14T01:20:27.222544504Z" level=info msg="TearDown network for sandbox \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\" successfully" Jan 14 01:20:27.227896 kubelet[3936]: E0114 01:20:27.227852 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:20:27.234453 containerd[2417]: time="2026-01-14T01:20:27.234414880Z" level=info msg="Ensure that sandbox 97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a in 
task-service has been cleanup successfully" Jan 14 01:20:27.234000 audit[6119]: NETFILTER_CFG table=filter:144 family=2 entries=48 op=nft_register_chain pid=6119 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:27.234000 audit[6119]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffdbeadc3c0 a2=0 a3=7ffdbeadc3ac items=0 ppid=5246 pid=6119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.234000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:27.277070 containerd[2417]: time="2026-01-14T01:20:27.277037244Z" level=info msg="RemovePodSandbox \"97de3115ef7fcc0c86c6195e9de7e6c831080d98e4a4dd286471ed3bf78c8e2a\" returns successfully" Jan 14 01:20:27.278213 containerd[2417]: time="2026-01-14T01:20:27.278172077Z" level=info msg="connecting to shim dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250" address="unix:///run/containerd/s/907e969797333b697b56512310b0eace2bdcdf09a3f7d15aac6f865724417d8a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:27.297809 systemd[1]: Started cri-containerd-dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250.scope - libcontainer container dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250. 
Jan 14 01:20:27.305000 audit: BPF prog-id=270 op=LOAD Jan 14 01:20:27.305000 audit: BPF prog-id=271 op=LOAD Jan 14 01:20:27.305000 audit[6140]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=6128 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623166333731353036626539363436643165656563303637633338 Jan 14 01:20:27.305000 audit: BPF prog-id=271 op=UNLOAD Jan 14 01:20:27.305000 audit[6140]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6128 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623166333731353036626539363436643165656563303637633338 Jan 14 01:20:27.306000 audit: BPF prog-id=272 op=LOAD Jan 14 01:20:27.306000 audit[6140]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=6128 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.306000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623166333731353036626539363436643165656563303637633338 Jan 14 01:20:27.306000 audit: BPF prog-id=273 op=LOAD Jan 14 01:20:27.306000 audit[6140]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=6128 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623166333731353036626539363436643165656563303637633338 Jan 14 01:20:27.306000 audit: BPF prog-id=273 op=UNLOAD Jan 14 01:20:27.306000 audit[6140]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6128 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623166333731353036626539363436643165656563303637633338 Jan 14 01:20:27.306000 audit: BPF prog-id=272 op=UNLOAD Jan 14 01:20:27.306000 audit[6140]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6128 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:20:27.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623166333731353036626539363436643165656563303637633338 Jan 14 01:20:27.306000 audit: BPF prog-id=274 op=LOAD Jan 14 01:20:27.306000 audit[6140]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=6128 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623166333731353036626539363436643165656563303637633338 Jan 14 01:20:27.336558 containerd[2417]: time="2026-01-14T01:20:27.336535246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ld4sb,Uid:4837b182-3864-4240-9f59-b7d855d0bb02,Namespace:kube-system,Attempt:0,} returns sandbox id \"dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250\"" Jan 14 01:20:27.338707 containerd[2417]: time="2026-01-14T01:20:27.338379982Z" level=info msg="CreateContainer within sandbox \"dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:20:27.363073 containerd[2417]: time="2026-01-14T01:20:27.363051742Z" level=info msg="Container 305821c804bbe37046c9f44c33107a5023f5fedead97d8fb16ad4e5d4f4cb53d: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:20:27.367070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781348027.mount: Deactivated successfully. 
Jan 14 01:20:27.378232 containerd[2417]: time="2026-01-14T01:20:27.378208408Z" level=info msg="CreateContainer within sandbox \"dab1f371506be9646d1eeec067c38f5ba2058a88621be881c5ead848f5e85250\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"305821c804bbe37046c9f44c33107a5023f5fedead97d8fb16ad4e5d4f4cb53d\"" Jan 14 01:20:27.378664 containerd[2417]: time="2026-01-14T01:20:27.378625746Z" level=info msg="StartContainer for \"305821c804bbe37046c9f44c33107a5023f5fedead97d8fb16ad4e5d4f4cb53d\"" Jan 14 01:20:27.379592 containerd[2417]: time="2026-01-14T01:20:27.379521643Z" level=info msg="connecting to shim 305821c804bbe37046c9f44c33107a5023f5fedead97d8fb16ad4e5d4f4cb53d" address="unix:///run/containerd/s/907e969797333b697b56512310b0eace2bdcdf09a3f7d15aac6f865724417d8a" protocol=ttrpc version=3 Jan 14 01:20:27.393801 systemd[1]: Started cri-containerd-305821c804bbe37046c9f44c33107a5023f5fedead97d8fb16ad4e5d4f4cb53d.scope - libcontainer container 305821c804bbe37046c9f44c33107a5023f5fedead97d8fb16ad4e5d4f4cb53d. 
Jan 14 01:20:27.401000 audit: BPF prog-id=275 op=LOAD Jan 14 01:20:27.401000 audit: BPF prog-id=276 op=LOAD Jan 14 01:20:27.401000 audit[6165]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=6128 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353832316338303462626533373034366339663434633333313037 Jan 14 01:20:27.401000 audit: BPF prog-id=276 op=UNLOAD Jan 14 01:20:27.401000 audit[6165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6128 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353832316338303462626533373034366339663434633333313037 Jan 14 01:20:27.401000 audit: BPF prog-id=277 op=LOAD Jan 14 01:20:27.401000 audit[6165]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=6128 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.401000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353832316338303462626533373034366339663434633333313037 Jan 14 01:20:27.401000 audit: BPF prog-id=278 op=LOAD Jan 14 01:20:27.401000 audit[6165]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=6128 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353832316338303462626533373034366339663434633333313037 Jan 14 01:20:27.401000 audit: BPF prog-id=278 op=UNLOAD Jan 14 01:20:27.401000 audit[6165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6128 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353832316338303462626533373034366339663434633333313037 Jan 14 01:20:27.401000 audit: BPF prog-id=277 op=UNLOAD Jan 14 01:20:27.401000 audit[6165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6128 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:20:27.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353832316338303462626533373034366339663434633333313037 Jan 14 01:20:27.401000 audit: BPF prog-id=279 op=LOAD Jan 14 01:20:27.401000 audit[6165]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=6128 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:27.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330353832316338303462626533373034366339663434633333313037 Jan 14 01:20:27.424666 containerd[2417]: time="2026-01-14T01:20:27.424507675Z" level=info msg="StartContainer for \"305821c804bbe37046c9f44c33107a5023f5fedead97d8fb16ad4e5d4f4cb53d\" returns successfully" Jan 14 01:20:27.686774 systemd-networkd[2060]: cali683fb4e77f2: Gained IPv6LL Jan 14 01:20:28.230017 kubelet[3936]: E0114 01:20:28.229713 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:20:28.263925 systemd-networkd[2060]: cali221c9d74e77: Gained IPv6LL Jan 14 01:20:28.270000 audit[6198]: 
NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=6198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:28.270000 audit[6198]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc5237b200 a2=0 a3=7ffc5237b1ec items=0 ppid=4040 pid=6198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:28.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:28.276000 audit[6198]: NETFILTER_CFG table=nat:146 family=2 entries=44 op=nft_register_rule pid=6198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:28.276000 audit[6198]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc5237b200 a2=0 a3=7ffc5237b1ec items=0 ppid=4040 pid=6198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:28.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:29.243350 kubelet[3936]: I0114 01:20:29.243172 3936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ld4sb" podStartSLOduration=58.243153292 podStartE2EDuration="58.243153292s" podCreationTimestamp="2026-01-14 01:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:20:28.257910878 +0000 UTC m=+61.288842077" watchObservedRunningTime="2026-01-14 01:20:29.243153292 +0000 UTC m=+62.274084491" Jan 14 01:20:29.257000 audit[6206]: NETFILTER_CFG table=filter:147 family=2 entries=14 
op=nft_register_rule pid=6206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:29.257000 audit[6206]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8ddf25b0 a2=0 a3=7ffc8ddf259c items=0 ppid=4040 pid=6206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:29.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:29.268000 audit[6206]: NETFILTER_CFG table=nat:148 family=2 entries=56 op=nft_register_chain pid=6206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:29.268000 audit[6206]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc8ddf25b0 a2=0 a3=7ffc8ddf259c items=0 ppid=4040 pid=6206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:29.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:30.048653 containerd[2417]: time="2026-01-14T01:20:30.048586847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8778f546-8gqr9,Uid:73f6bb79-7f15-4fdc-bde4-bdf058188aed,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:20:30.152895 systemd-networkd[2060]: calied08553de3f: Link UP Jan 14 01:20:30.153107 systemd-networkd[2060]: calied08553de3f: Gained carrier Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.090 [INFO][6209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0 calico-apiserver-5d8778f546- 
calico-apiserver 73f6bb79-7f15-4fdc-bde4-bdf058188aed 826 0 2026-01-14 01:19:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d8778f546 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad calico-apiserver-5d8778f546-8gqr9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied08553de3f [] [] }} ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-8gqr9" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.090 [INFO][6209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-8gqr9" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.109 [INFO][6220] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" HandleID="k8s-pod-network.e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.110 [INFO][6220] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" HandleID="k8s-pod-network.e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"calico-apiserver-5d8778f546-8gqr9", "timestamp":"2026-01-14 01:20:30.109904635 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.110 [INFO][6220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.110 [INFO][6220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.110 [INFO][6220] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.114 [INFO][6220] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.118 [INFO][6220] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.121 [INFO][6220] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.123 [INFO][6220] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.125 [INFO][6220] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.125 [INFO][6220] ipam/ipam.go 1219: Attempting to assign 1 addresses from 
block block=192.168.108.0/26 handle="k8s-pod-network.e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.126 [INFO][6220] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60 Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.130 [INFO][6220] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.148 [INFO][6220] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.108.9/26] block=192.168.108.0/26 handle="k8s-pod-network.e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.148 [INFO][6220] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.9/26] handle="k8s-pod-network.e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.148 [INFO][6220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:20:30.165564 containerd[2417]: 2026-01-14 01:20:30.148 [INFO][6220] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.9/26] IPv6=[] ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" HandleID="k8s-pod-network.e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" Jan 14 01:20:30.167074 containerd[2417]: 2026-01-14 01:20:30.150 [INFO][6209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-8gqr9" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0", GenerateName:"calico-apiserver-5d8778f546-", Namespace:"calico-apiserver", SelfLink:"", UID:"73f6bb79-7f15-4fdc-bde4-bdf058188aed", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8778f546", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"calico-apiserver-5d8778f546-8gqr9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.108.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied08553de3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:30.167074 containerd[2417]: 2026-01-14 01:20:30.150 [INFO][6209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.9/32] ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-8gqr9" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" Jan 14 01:20:30.167074 containerd[2417]: 2026-01-14 01:20:30.150 [INFO][6209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied08553de3f ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-8gqr9" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" Jan 14 01:20:30.167074 containerd[2417]: 2026-01-14 01:20:30.153 [INFO][6209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-8gqr9" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" Jan 14 01:20:30.167074 containerd[2417]: 2026-01-14 01:20:30.153 [INFO][6209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-8gqr9" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0", GenerateName:"calico-apiserver-5d8778f546-", Namespace:"calico-apiserver", SelfLink:"", UID:"73f6bb79-7f15-4fdc-bde4-bdf058188aed", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8778f546", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60", Pod:"calico-apiserver-5d8778f546-8gqr9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied08553de3f", MAC:"a6:77:ff:5c:cc:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:30.167074 containerd[2417]: 2026-01-14 01:20:30.163 [INFO][6209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" Namespace="calico-apiserver" Pod="calico-apiserver-5d8778f546-8gqr9" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-calico--apiserver--5d8778f546--8gqr9-eth0" Jan 14 01:20:30.178000 audit[6235]: NETFILTER_CFG table=filter:149 family=2 entries=57 
op=nft_register_chain pid=6235 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:30.178000 audit[6235]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7fff1ccefe30 a2=0 a3=7fff1ccefe1c items=0 ppid=5246 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:30.178000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:30.225562 containerd[2417]: time="2026-01-14T01:20:30.225518775Z" level=info msg="connecting to shim e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60" address="unix:///run/containerd/s/bc0b577fae915a44d2b51032f597178488fb537912dd55f0625a2f0c9ac944fe" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:30.249804 systemd[1]: Started cri-containerd-e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60.scope - libcontainer container e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60. 
Jan 14 01:20:30.261000 audit: BPF prog-id=280 op=LOAD Jan 14 01:20:30.262000 audit: BPF prog-id=281 op=LOAD Jan 14 01:20:30.262000 audit[6255]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6244 pid=6255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:30.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616362353339653130656366303435303735306363663837363062 Jan 14 01:20:30.262000 audit: BPF prog-id=281 op=UNLOAD Jan 14 01:20:30.262000 audit[6255]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6244 pid=6255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:30.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616362353339653130656366303435303735306363663837363062 Jan 14 01:20:30.262000 audit: BPF prog-id=282 op=LOAD Jan 14 01:20:30.262000 audit[6255]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6244 pid=6255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:30.262000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616362353339653130656366303435303735306363663837363062 Jan 14 01:20:30.262000 audit: BPF prog-id=283 op=LOAD Jan 14 01:20:30.262000 audit[6255]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6244 pid=6255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:30.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616362353339653130656366303435303735306363663837363062 Jan 14 01:20:30.262000 audit: BPF prog-id=283 op=UNLOAD Jan 14 01:20:30.262000 audit[6255]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=6244 pid=6255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:30.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616362353339653130656366303435303735306363663837363062 Jan 14 01:20:30.262000 audit: BPF prog-id=282 op=UNLOAD Jan 14 01:20:30.262000 audit[6255]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6244 pid=6255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:20:30.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616362353339653130656366303435303735306363663837363062 Jan 14 01:20:30.262000 audit: BPF prog-id=284 op=LOAD Jan 14 01:20:30.262000 audit[6255]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6244 pid=6255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:30.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616362353339653130656366303435303735306363663837363062 Jan 14 01:20:30.292150 containerd[2417]: time="2026-01-14T01:20:30.292125195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8778f546-8gqr9,Uid:73f6bb79-7f15-4fdc-bde4-bdf058188aed,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e1acb539e10ecf0450750ccf8760b1ec2dadaf5b908b35d2e744746e7c57fc60\"" Jan 14 01:20:30.293448 containerd[2417]: time="2026-01-14T01:20:30.293384919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:20:30.574830 containerd[2417]: time="2026-01-14T01:20:30.574781485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:30.578797 containerd[2417]: time="2026-01-14T01:20:30.578705376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 
01:20:30.578797 containerd[2417]: time="2026-01-14T01:20:30.578729539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:30.578936 kubelet[3936]: E0114 01:20:30.578887 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:30.579354 kubelet[3936]: E0114 01:20:30.578966 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:30.579354 kubelet[3936]: E0114 01:20:30.579095 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mfgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8778f546-8gqr9_calico-apiserver(73f6bb79-7f15-4fdc-bde4-bdf058188aed): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:30.580413 kubelet[3936]: E0114 01:20:30.580372 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:20:31.240073 kubelet[3936]: E0114 01:20:31.240039 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:20:31.275681 kernel: kauditd_printk_skb: 161 callbacks suppressed Jan 14 01:20:31.275770 kernel: audit: type=1325 audit(1768353631.270:785): table=filter:150 family=2 entries=14 op=nft_register_rule pid=6280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:31.270000 audit[6280]: NETFILTER_CFG table=filter:150 family=2 entries=14 op=nft_register_rule pid=6280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:31.270000 audit[6280]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd8632ddc0 a2=0 a3=7ffd8632ddac items=0 ppid=4040 pid=6280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:31.281431 kernel: audit: type=1300 audit(1768353631.270:785): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd8632ddc0 a2=0 a3=7ffd8632ddac items=0 ppid=4040 pid=6280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:31.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:31.286425 kernel: audit: type=1327 audit(1768353631.270:785): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:31.282000 audit[6280]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=6280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:31.289510 kernel: audit: type=1325 audit(1768353631.282:786): table=nat:151 family=2 entries=20 op=nft_register_rule pid=6280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:31.282000 audit[6280]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd8632ddc0 a2=0 a3=7ffd8632ddac items=0 ppid=4040 pid=6280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:31.294343 kernel: audit: type=1300 audit(1768353631.282:786): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd8632ddc0 a2=0 a3=7ffd8632ddac items=0 ppid=4040 pid=6280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:31.282000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:31.298741 kernel: audit: type=1327 audit(1768353631.282:786): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:32.038785 systemd-networkd[2060]: calied08553de3f: Gained IPv6LL Jan 14 01:20:32.049281 containerd[2417]: time="2026-01-14T01:20:32.049247723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dg8l8,Uid:f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5,Namespace:calico-system,Attempt:0,}" Jan 14 01:20:32.151597 systemd-networkd[2060]: cali1c8cddb95c4: Link UP Jan 14 01:20:32.152282 systemd-networkd[2060]: cali1c8cddb95c4: Gained carrier Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.096 [INFO][6282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0 goldmane-666569f655- calico-system f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5 819 0 2026-01-14 01:19:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4578.0.0-p-dbef80f9ad goldmane-666569f655-dg8l8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1c8cddb95c4 [] [] }} ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Namespace="calico-system" Pod="goldmane-666569f655-dg8l8" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.096 [INFO][6282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Namespace="calico-system" Pod="goldmane-666569f655-dg8l8" 
WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.118 [INFO][6293] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" HandleID="k8s-pod-network.36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.118 [INFO][6293] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" HandleID="k8s-pod-network.36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-dbef80f9ad", "pod":"goldmane-666569f655-dg8l8", "timestamp":"2026-01-14 01:20:32.118607041 +0000 UTC"}, Hostname:"ci-4578.0.0-p-dbef80f9ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.118 [INFO][6293] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.118 [INFO][6293] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.118 [INFO][6293] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-dbef80f9ad' Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.123 [INFO][6293] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.127 [INFO][6293] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.130 [INFO][6293] ipam/ipam.go 511: Trying affinity for 192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.132 [INFO][6293] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.133 [INFO][6293] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.134 [INFO][6293] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.135 [INFO][6293] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0 Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.139 [INFO][6293] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.147 [INFO][6293] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.108.10/26] block=192.168.108.0/26 handle="k8s-pod-network.36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.147 [INFO][6293] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.10/26] handle="k8s-pod-network.36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" host="ci-4578.0.0-p-dbef80f9ad" Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.147 [INFO][6293] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:20:32.165510 containerd[2417]: 2026-01-14 01:20:32.147 [INFO][6293] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.108.10/26] IPv6=[] ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" HandleID="k8s-pod-network.36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Workload="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" Jan 14 01:20:32.166292 containerd[2417]: 2026-01-14 01:20:32.149 [INFO][6282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Namespace="calico-system" Pod="goldmane-666569f655-dg8l8" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"", Pod:"goldmane-666569f655-dg8l8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1c8cddb95c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:32.166292 containerd[2417]: 2026-01-14 01:20:32.149 [INFO][6282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.10/32] ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Namespace="calico-system" Pod="goldmane-666569f655-dg8l8" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" Jan 14 01:20:32.166292 containerd[2417]: 2026-01-14 01:20:32.149 [INFO][6282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c8cddb95c4 ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Namespace="calico-system" Pod="goldmane-666569f655-dg8l8" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" Jan 14 01:20:32.166292 containerd[2417]: 2026-01-14 01:20:32.152 [INFO][6282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Namespace="calico-system" Pod="goldmane-666569f655-dg8l8" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" Jan 14 01:20:32.166292 containerd[2417]: 2026-01-14 01:20:32.153 [INFO][6282] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Namespace="calico-system" Pod="goldmane-666569f655-dg8l8" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-dbef80f9ad", ContainerID:"36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0", Pod:"goldmane-666569f655-dg8l8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1c8cddb95c4", MAC:"72:04:36:34:fe:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:20:32.166292 containerd[2417]: 2026-01-14 01:20:32.162 [INFO][6282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" Namespace="calico-system" Pod="goldmane-666569f655-dg8l8" WorkloadEndpoint="ci--4578.0.0--p--dbef80f9ad-k8s-goldmane--666569f655--dg8l8-eth0" Jan 14 01:20:32.185000 audit[6308]: NETFILTER_CFG table=filter:152 family=2 entries=74 op=nft_register_chain pid=6308 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:32.192097 kernel: audit: type=1325 audit(1768353632.185:787): table=filter:152 family=2 entries=74 op=nft_register_chain pid=6308 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:20:32.185000 audit[6308]: SYSCALL arch=c000003e syscall=46 success=yes exit=35144 a0=3 a1=7ffd43cdc400 a2=0 a3=7ffd43cdc3ec items=0 ppid=5246 pid=6308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:32.202582 kernel: audit: type=1300 audit(1768353632.185:787): arch=c000003e syscall=46 success=yes exit=35144 a0=3 a1=7ffd43cdc400 a2=0 a3=7ffd43cdc3ec items=0 ppid=5246 pid=6308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:32.202662 kernel: audit: type=1327 audit(1768353632.185:787): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:32.185000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:20:32.219027 containerd[2417]: time="2026-01-14T01:20:32.218956290Z" level=info msg="connecting to shim 36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0" 
address="unix:///run/containerd/s/5f3d2c86b00cc2041099da304c1ad9d494b2834ed2d98cc6fa115db98eb1826d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:32.242437 kubelet[3936]: E0114 01:20:32.242407 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:20:32.243861 systemd[1]: Started cri-containerd-36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0.scope - libcontainer container 36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0. Jan 14 01:20:32.256000 audit: BPF prog-id=285 op=LOAD Jan 14 01:20:32.260113 kernel: audit: type=1334 audit(1768353632.256:788): prog-id=285 op=LOAD Jan 14 01:20:32.260000 audit: BPF prog-id=286 op=LOAD Jan 14 01:20:32.260000 audit[6331]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6319 pid=6331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:32.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393431663738303932373532356565626431393964613561643661 Jan 14 01:20:32.260000 audit: BPF prog-id=286 op=UNLOAD Jan 14 01:20:32.260000 audit[6331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6319 pid=6331 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:32.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393431663738303932373532356565626431393964613561643661 Jan 14 01:20:32.260000 audit: BPF prog-id=287 op=LOAD Jan 14 01:20:32.260000 audit[6331]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=6319 pid=6331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:32.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393431663738303932373532356565626431393964613561643661 Jan 14 01:20:32.260000 audit: BPF prog-id=288 op=LOAD Jan 14 01:20:32.260000 audit[6331]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=6319 pid=6331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:32.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393431663738303932373532356565626431393964613561643661 Jan 14 01:20:32.260000 audit: BPF prog-id=288 op=UNLOAD Jan 14 01:20:32.260000 audit[6331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 
ppid=6319 pid=6331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:32.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393431663738303932373532356565626431393964613561643661 Jan 14 01:20:32.260000 audit: BPF prog-id=287 op=UNLOAD Jan 14 01:20:32.260000 audit[6331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6319 pid=6331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:32.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393431663738303932373532356565626431393964613561643661 Jan 14 01:20:32.260000 audit: BPF prog-id=289 op=LOAD Jan 14 01:20:32.260000 audit[6331]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=6319 pid=6331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:32.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393431663738303932373532356565626431393964613561643661 Jan 14 01:20:32.291267 containerd[2417]: time="2026-01-14T01:20:32.291190343Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-dg8l8,Uid:f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"36941f780927525eebd199da5ad6ae27a9df4292e8079ea2475ca950c889b2c0\"" Jan 14 01:20:32.293065 containerd[2417]: time="2026-01-14T01:20:32.293011680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:20:32.555424 containerd[2417]: time="2026-01-14T01:20:32.555331574Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:32.573779 containerd[2417]: time="2026-01-14T01:20:32.573750206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:20:32.573873 containerd[2417]: time="2026-01-14T01:20:32.573815400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:32.573943 kubelet[3936]: E0114 01:20:32.573911 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:20:32.573989 kubelet[3936]: E0114 01:20:32.573952 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:20:32.574147 kubelet[3936]: E0114 01:20:32.574081 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hh2t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dg8l8_calico-system(f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:32.575272 kubelet[3936]: E0114 01:20:32.575244 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:20:33.049927 containerd[2417]: time="2026-01-14T01:20:33.049815991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:20:33.244221 kubelet[3936]: E0114 01:20:33.244011 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:20:33.281000 audit[6356]: NETFILTER_CFG table=filter:153 family=2 entries=14 op=nft_register_rule pid=6356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:33.281000 audit[6356]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffee1b94f0 a2=0 a3=7fffee1b94dc items=0 ppid=4040 pid=6356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:33.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:33.288000 audit[6356]: NETFILTER_CFG table=nat:154 family=2 entries=20 op=nft_register_rule pid=6356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:33.288000 audit[6356]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffee1b94f0 a2=0 a3=7fffee1b94dc items=0 ppid=4040 pid=6356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:33.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:33.312888 containerd[2417]: time="2026-01-14T01:20:33.312745108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:33.316208 containerd[2417]: time="2026-01-14T01:20:33.316179421Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:20:33.316291 containerd[2417]: time="2026-01-14T01:20:33.316235409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:33.316354 kubelet[3936]: E0114 01:20:33.316326 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:33.316414 kubelet[3936]: E0114 01:20:33.316363 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:33.316506 kubelet[3936]: E0114 01:20:33.316475 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-689q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8778f546-mp9tk_calico-apiserver(f6926f72-9c01-4e67-abef-2eb546c46570): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:33.317850 kubelet[3936]: E0114 01:20:33.317784 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:20:34.022785 systemd-networkd[2060]: cali1c8cddb95c4: Gained IPv6LL Jan 14 01:20:34.049445 containerd[2417]: time="2026-01-14T01:20:34.049412641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:20:34.245871 kubelet[3936]: E0114 01:20:34.245671 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:20:34.327447 containerd[2417]: time="2026-01-14T01:20:34.327365645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:34.331496 containerd[2417]: time="2026-01-14T01:20:34.331461104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:20:34.331578 containerd[2417]: 
time="2026-01-14T01:20:34.331468855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:34.331691 kubelet[3936]: E0114 01:20:34.331623 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:20:34.331740 kubelet[3936]: E0114 01:20:34.331688 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:20:34.331835 kubelet[3936]: E0114 01:20:34.331800 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxvs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:20:34.333958 containerd[2417]: time="2026-01-14T01:20:34.333931138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:20:34.591745 containerd[2417]: time="2026-01-14T01:20:34.591661682Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:34.596163 containerd[2417]: time="2026-01-14T01:20:34.596126264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:20:34.596248 containerd[2417]: time="2026-01-14T01:20:34.596192952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:34.596342 kubelet[3936]: E0114 01:20:34.596302 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:20:34.596401 kubelet[3936]: E0114 01:20:34.596352 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:20:34.596502 kubelet[3936]: E0114 01:20:34.596470 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxvs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:34.597963 kubelet[3936]: E0114 01:20:34.597860 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:20:39.051773 containerd[2417]: time="2026-01-14T01:20:39.051724771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:20:39.328561 containerd[2417]: time="2026-01-14T01:20:39.328465650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:39.331605 containerd[2417]: time="2026-01-14T01:20:39.331571034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:20:39.331749 containerd[2417]: time="2026-01-14T01:20:39.331623889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:39.331851 kubelet[3936]: E0114 01:20:39.331810 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:20:39.332210 kubelet[3936]: E0114 01:20:39.331860 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:20:39.332210 kubelet[3936]: E0114 01:20:39.332054 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnsz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f77f5cb44-nf9jt_calico-system(1876e14b-df10-499c-9b9b-1ece31d0136a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:39.332354 containerd[2417]: time="2026-01-14T01:20:39.332336864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:20:39.333720 kubelet[3936]: E0114 01:20:39.333689 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:20:39.607884 containerd[2417]: time="2026-01-14T01:20:39.607556377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:39.610488 containerd[2417]: time="2026-01-14T01:20:39.610461358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:20:39.610645 containerd[2417]: time="2026-01-14T01:20:39.610520160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:39.610679 kubelet[3936]: E0114 01:20:39.610626 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:20:39.610729 kubelet[3936]: E0114 01:20:39.610674 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:20:39.610911 kubelet[3936]: E0114 01:20:39.610778 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:773bdd7ef7d541e28728652281bd8ea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgkfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69445f69fb-5cr2q_calico-system(ed6d8542-dfd5-4ecd-928d-cf86db3537f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:39.612689 containerd[2417]: time="2026-01-14T01:20:39.612662962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:20:39.879278 containerd[2417]: 
time="2026-01-14T01:20:39.879142096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:39.883262 containerd[2417]: time="2026-01-14T01:20:39.883234738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:20:39.883327 containerd[2417]: time="2026-01-14T01:20:39.883250047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:39.883413 kubelet[3936]: E0114 01:20:39.883380 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:20:39.883468 kubelet[3936]: E0114 01:20:39.883422 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:20:39.883572 kubelet[3936]: E0114 01:20:39.883546 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgkfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69445f69fb-5cr2q_calico-system(ed6d8542-dfd5-4ecd-928d-cf86db3537f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:39.885520 kubelet[3936]: E0114 01:20:39.885474 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:20:40.048984 containerd[2417]: time="2026-01-14T01:20:40.048964899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:20:40.309130 containerd[2417]: time="2026-01-14T01:20:40.309104688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:40.313221 containerd[2417]: time="2026-01-14T01:20:40.313193733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:20:40.313274 containerd[2417]: time="2026-01-14T01:20:40.313264003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:40.313425 kubelet[3936]: E0114 01:20:40.313397 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:40.313465 kubelet[3936]: E0114 01:20:40.313431 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:40.313566 kubelet[3936]: E0114 01:20:40.313529 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7758cb5d69-rt2pc_calico-apiserver(b4f27767-b32c-43ae-95eb-f1d5e5f34f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:40.314795 kubelet[3936]: E0114 01:20:40.314763 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:20:45.050260 containerd[2417]: time="2026-01-14T01:20:45.050207600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:20:45.324505 containerd[2417]: time="2026-01-14T01:20:45.324287382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:20:45.329019 containerd[2417]: time="2026-01-14T01:20:45.328992819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:20:45.329072 containerd[2417]: time="2026-01-14T01:20:45.329055075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:45.329209 kubelet[3936]: E0114 01:20:45.329163 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:45.329209 kubelet[3936]: E0114 01:20:45.329203 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:45.329532 kubelet[3936]: E0114 01:20:45.329405 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mfgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8778f546-8gqr9_calico-apiserver(73f6bb79-7f15-4fdc-bde4-bdf058188aed): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:45.329891 containerd[2417]: time="2026-01-14T01:20:45.329869357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:20:45.331570 kubelet[3936]: E0114 01:20:45.331197 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:20:45.595972 containerd[2417]: time="2026-01-14T01:20:45.595880767Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:45.599630 containerd[2417]: time="2026-01-14T01:20:45.599593588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:20:45.599707 containerd[2417]: time="2026-01-14T01:20:45.599602977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:45.600819 kubelet[3936]: E0114 01:20:45.600750 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:20:45.600819 kubelet[3936]: E0114 01:20:45.600793 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:20:45.601128 kubelet[3936]: E0114 01:20:45.601083 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hh2t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dg8l8_calico-system(f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:45.602945 kubelet[3936]: E0114 01:20:45.602913 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:20:47.049975 kubelet[3936]: E0114 01:20:47.049931 3936 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:20:47.053029 kubelet[3936]: E0114 01:20:47.052993 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:20:50.054115 kubelet[3936]: E0114 01:20:50.054068 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:20:51.051035 kubelet[3936]: E0114 01:20:51.050174 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:20:53.742473 waagent[2621]: 2026-01-14T01:20:53.742122Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 14 01:20:53.751664 waagent[2621]: 2026-01-14T01:20:53.751243Z INFO ExtHandler Jan 14 01:20:53.751664 waagent[2621]: 2026-01-14T01:20:53.751365Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a18d12b5-e32f-43aa-b586-b20b3f303b69 eTag: 4191398890310054546 source: Fabric] Jan 14 01:20:53.751936 waagent[2621]: 2026-01-14T01:20:53.751905Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 14 01:20:53.752580 waagent[2621]: 2026-01-14T01:20:53.752541Z INFO ExtHandler Jan 14 01:20:53.752731 waagent[2621]: 2026-01-14T01:20:53.752707Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 14 01:20:53.809645 waagent[2621]: 2026-01-14T01:20:53.809591Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 01:20:53.966625 waagent[2621]: 2026-01-14T01:20:53.966568Z INFO ExtHandler Downloaded certificate {'thumbprint': '385C0CE2D1026FFDDB0DC16C651D2A3AFFB14354', 'hasPrivateKey': True} Jan 14 01:20:53.967014 waagent[2621]: 2026-01-14T01:20:53.966982Z INFO ExtHandler Fetch goal state completed Jan 14 01:20:53.967299 waagent[2621]: 2026-01-14T01:20:53.967270Z INFO ExtHandler ExtHandler Jan 14 01:20:53.967348 waagent[2621]: 2026-01-14T01:20:53.967327Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 4de3bfbf-db87-49cf-8949-fe41b781d8bb correlation 9d733e6d-0d48-407d-8f5f-837a80ecd238 created: 2026-01-14T01:20:47.105957Z] Jan 14 01:20:53.967565 waagent[2621]: 2026-01-14T01:20:53.967539Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 14 01:20:53.968021 waagent[2621]: 2026-01-14T01:20:53.967994Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 14 01:20:56.051660 kubelet[3936]: E0114 01:20:56.051341 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:20:58.050660 kubelet[3936]: E0114 01:20:58.050002 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:20:59.053736 containerd[2417]: time="2026-01-14T01:20:59.051426709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:20:59.317801 containerd[2417]: time="2026-01-14T01:20:59.317652501Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:59.328158 containerd[2417]: time="2026-01-14T01:20:59.328125343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:20:59.328254 containerd[2417]: 
time="2026-01-14T01:20:59.328191389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:59.328330 kubelet[3936]: E0114 01:20:59.328289 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:20:59.328598 kubelet[3936]: E0114 01:20:59.328341 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:20:59.328598 kubelet[3936]: E0114 01:20:59.328565 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxvs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:20:59.329113 containerd[2417]: time="2026-01-14T01:20:59.329073390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:20:59.602557 containerd[2417]: time="2026-01-14T01:20:59.602465981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:59.605550 containerd[2417]: time="2026-01-14T01:20:59.605509844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:20:59.605663 containerd[2417]: time="2026-01-14T01:20:59.605577214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:59.605735 kubelet[3936]: E0114 01:20:59.605693 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:59.605799 kubelet[3936]: E0114 01:20:59.605748 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:20:59.606093 containerd[2417]: time="2026-01-14T01:20:59.606005078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:20:59.606318 kubelet[3936]: E0114 01:20:59.606138 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-689q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-5d8778f546-mp9tk_calico-apiserver(f6926f72-9c01-4e67-abef-2eb546c46570): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:59.607544 kubelet[3936]: E0114 01:20:59.607485 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:20:59.873347 containerd[2417]: time="2026-01-14T01:20:59.873168576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:20:59.876989 containerd[2417]: time="2026-01-14T01:20:59.876871969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:20:59.876989 containerd[2417]: time="2026-01-14T01:20:59.876958712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:20:59.877095 kubelet[3936]: E0114 01:20:59.877057 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:20:59.877137 kubelet[3936]: E0114 
01:20:59.877097 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:20:59.877651 kubelet[3936]: E0114 01:20:59.877206 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxvs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,Pro
cMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:20:59.878434 kubelet[3936]: E0114 01:20:59.878388 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:21:01.050696 kubelet[3936]: E0114 01:21:01.050345 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" 
podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:21:01.051707 containerd[2417]: time="2026-01-14T01:21:01.051567919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:21:01.326108 containerd[2417]: time="2026-01-14T01:21:01.325798318Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:01.329278 containerd[2417]: time="2026-01-14T01:21:01.329154723Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:21:01.329278 containerd[2417]: time="2026-01-14T01:21:01.329243750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:01.329421 kubelet[3936]: E0114 01:21:01.329355 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:01.329421 kubelet[3936]: E0114 01:21:01.329400 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:01.330411 kubelet[3936]: E0114 01:21:01.329512 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:773bdd7ef7d541e28728652281bd8ea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgkfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69445f69fb-5cr2q_calico-system(ed6d8542-dfd5-4ecd-928d-cf86db3537f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:01.332216 containerd[2417]: time="2026-01-14T01:21:01.332016072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:21:01.622843 containerd[2417]: 
time="2026-01-14T01:21:01.622676903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:01.626463 containerd[2417]: time="2026-01-14T01:21:01.626370787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:21:01.626463 containerd[2417]: time="2026-01-14T01:21:01.626405956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:01.626746 kubelet[3936]: E0114 01:21:01.626709 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:01.626814 kubelet[3936]: E0114 01:21:01.626753 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:01.626892 kubelet[3936]: E0114 01:21:01.626865 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgkfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69445f69fb-5cr2q_calico-system(ed6d8542-dfd5-4ecd-928d-cf86db3537f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:01.629142 kubelet[3936]: E0114 01:21:01.628620 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:21:06.050577 containerd[2417]: time="2026-01-14T01:21:06.050534907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:21:06.320799 containerd[2417]: time="2026-01-14T01:21:06.320601840Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:06.329688 containerd[2417]: time="2026-01-14T01:21:06.329561489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:21:06.329688 containerd[2417]: time="2026-01-14T01:21:06.329660176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:06.330510 kubelet[3936]: E0114 01:21:06.329898 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:21:06.330510 kubelet[3936]: E0114 01:21:06.329939 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:21:06.330510 kubelet[3936]: E0114 01:21:06.330084 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnsz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f77f5cb44-nf9jt_calico-system(1876e14b-df10-499c-9b9b-1ece31d0136a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:06.331212 kubelet[3936]: E0114 01:21:06.331185 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" 
podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:21:10.049339 containerd[2417]: time="2026-01-14T01:21:10.049296297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:21:10.317534 containerd[2417]: time="2026-01-14T01:21:10.317416853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:10.323844 containerd[2417]: time="2026-01-14T01:21:10.323814855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:21:10.323934 containerd[2417]: time="2026-01-14T01:21:10.323876546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:10.324035 kubelet[3936]: E0114 01:21:10.323977 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:10.324470 kubelet[3936]: E0114 01:21:10.324043 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:10.324470 kubelet[3936]: E0114 01:21:10.324188 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7758cb5d69-rt2pc_calico-apiserver(b4f27767-b32c-43ae-95eb-f1d5e5f34f59): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:10.325371 kubelet[3936]: E0114 01:21:10.325333 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:21:10.907622 systemd[1]: Started sshd@7-10.200.4.14:22-10.200.16.10:40034.service - OpenSSH per-connection server daemon (10.200.16.10:40034). Jan 14 01:21:10.914260 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 14 01:21:10.914373 kernel: audit: type=1130 audit(1768353670.907:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.14:22-10.200.16.10:40034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:10.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.14:22-10.200.16.10:40034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:11.465000 audit[6417]: USER_ACCT pid=6417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.466234 sshd[6417]: Accepted publickey for core from 10.200.16.10 port 40034 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:11.468540 sshd-session[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:11.467000 audit[6417]: CRED_ACQ pid=6417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.476808 kernel: audit: type=1101 audit(1768353671.465:799): pid=6417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.476872 kernel: audit: type=1103 audit(1768353671.467:800): pid=6417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.482180 kernel: audit: type=1006 audit(1768353671.467:801): pid=6417 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:21:11.481908 systemd-logind[2405]: New session 11 of user core. 
Jan 14 01:21:11.467000 audit[6417]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd80f43fc0 a2=3 a3=0 items=0 ppid=1 pid=6417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.489706 kernel: audit: type=1300 audit(1768353671.467:801): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd80f43fc0 a2=3 a3=0 items=0 ppid=1 pid=6417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.467000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:11.493194 kernel: audit: type=1327 audit(1768353671.467:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:11.495824 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 01:21:11.497000 audit[6417]: USER_START pid=6417 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.504658 kernel: audit: type=1105 audit(1768353671.497:802): pid=6417 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.504000 audit[6421]: CRED_ACQ pid=6421 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.513660 kernel: audit: type=1103 audit(1768353671.504:803): pid=6421 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.823608 sshd[6421]: Connection closed by 10.200.16.10 port 40034 Jan 14 01:21:11.824776 sshd-session[6417]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:11.825000 audit[6417]: USER_END pid=6417 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.832708 kernel: audit: type=1106 audit(1768353671.825:804): pid=6417 
uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.833408 systemd[1]: sshd@7-10.200.4.14:22-10.200.16.10:40034.service: Deactivated successfully. Jan 14 01:21:11.825000 audit[6417]: CRED_DISP pid=6417 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.841700 kernel: audit: type=1104 audit(1768353671.825:805): pid=6417 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:11.836814 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:21:11.840801 systemd-logind[2405]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:21:11.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.14:22-10.200.16.10:40034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:11.843120 systemd-logind[2405]: Removed session 11. 
Jan 14 01:21:12.050907 kubelet[3936]: E0114 01:21:12.049875 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:21:12.052077 containerd[2417]: time="2026-01-14T01:21:12.052027566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:21:12.319155 containerd[2417]: time="2026-01-14T01:21:12.319112623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:12.322753 containerd[2417]: time="2026-01-14T01:21:12.322720549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:12.322870 containerd[2417]: time="2026-01-14T01:21:12.322744712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:21:12.323034 kubelet[3936]: E0114 01:21:12.323006 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:21:12.323133 kubelet[3936]: E0114 01:21:12.323116 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:21:12.323736 kubelet[3936]: E0114 01:21:12.323676 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hh2t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dg8l8_calico-system(f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:12.325067 kubelet[3936]: E0114 01:21:12.325025 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:21:13.050923 containerd[2417]: time="2026-01-14T01:21:13.050866328Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:21:13.315808 containerd[2417]: time="2026-01-14T01:21:13.315679768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:13.319654 containerd[2417]: time="2026-01-14T01:21:13.319518053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:21:13.319654 containerd[2417]: time="2026-01-14T01:21:13.319556247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:13.320158 kubelet[3936]: E0114 01:21:13.319937 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:13.320158 kubelet[3936]: E0114 01:21:13.320106 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:13.320967 kubelet[3936]: E0114 01:21:13.320698 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mfgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8778f546-8gqr9_calico-apiserver(73f6bb79-7f15-4fdc-bde4-bdf058188aed): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:13.322277 kubelet[3936]: E0114 01:21:13.322143 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:21:14.050544 kubelet[3936]: E0114 01:21:14.050503 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:21:14.051777 kubelet[3936]: E0114 01:21:14.050870 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:21:16.945084 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:21:16.945185 kernel: audit: type=1130 audit(1768353676.936:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.14:22-10.200.16.10:40044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:16.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.14:22-10.200.16.10:40044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:16.936680 systemd[1]: Started sshd@8-10.200.4.14:22-10.200.16.10:40044.service - OpenSSH per-connection server daemon (10.200.16.10:40044). 
Jan 14 01:21:17.480000 audit[6462]: USER_ACCT pid=6462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.484368 sshd-session[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:17.487311 sshd[6462]: Accepted publickey for core from 10.200.16.10 port 40044 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:17.489211 kernel: audit: type=1101 audit(1768353677.480:808): pid=6462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.489289 kernel: audit: type=1103 audit(1768353677.481:809): pid=6462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.481000 audit[6462]: CRED_ACQ pid=6462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.501877 systemd-logind[2405]: New session 12 of user core. 
Jan 14 01:21:17.510840 kernel: audit: type=1006 audit(1768353677.481:810): pid=6462 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 01:21:17.510913 kernel: audit: type=1300 audit(1768353677.481:810): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2419be30 a2=3 a3=0 items=0 ppid=1 pid=6462 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:17.481000 audit[6462]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2419be30 a2=3 a3=0 items=0 ppid=1 pid=6462 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:17.525280 kernel: audit: type=1327 audit(1768353677.481:810): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:17.481000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:17.525957 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 01:21:17.530000 audit[6462]: USER_START pid=6462 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.541682 kernel: audit: type=1105 audit(1768353677.530:811): pid=6462 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.540000 audit[6466]: CRED_ACQ pid=6466 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.551658 kernel: audit: type=1103 audit(1768353677.540:812): pid=6466 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.859785 sshd[6466]: Connection closed by 10.200.16.10 port 40044 Jan 14 01:21:17.860365 sshd-session[6462]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:17.861000 audit[6462]: USER_END pid=6462 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.861000 audit[6462]: CRED_DISP pid=6462 uid=0 auid=500 ses=12 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.866976 systemd[1]: sshd@8-10.200.4.14:22-10.200.16.10:40044.service: Deactivated successfully. Jan 14 01:21:17.870383 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:21:17.871976 kernel: audit: type=1106 audit(1768353677.861:813): pid=6462 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.872032 kernel: audit: type=1104 audit(1768353677.861:814): pid=6462 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:17.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.14:22-10.200.16.10:40044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:17.874109 systemd-logind[2405]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:21:17.875280 systemd-logind[2405]: Removed session 12. 
Jan 14 01:21:18.049394 kubelet[3936]: E0114 01:21:18.049320 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:21:22.050569 kubelet[3936]: E0114 01:21:22.050172 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:21:22.973285 systemd[1]: Started sshd@9-10.200.4.14:22-10.200.16.10:37920.service - OpenSSH per-connection server daemon (10.200.16.10:37920). Jan 14 01:21:22.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.14:22-10.200.16.10:37920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:22.975273 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:21:22.975435 kernel: audit: type=1130 audit(1768353682.972:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.14:22-10.200.16.10:37920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:23.051646 kubelet[3936]: E0114 01:21:23.051593 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:21:23.523000 audit[6478]: USER_ACCT pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.529498 sshd[6478]: Accepted publickey for core from 10.200.16.10 port 37920 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:23.529804 kernel: audit: type=1101 audit(1768353683.523:817): pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.529000 audit[6478]: CRED_ACQ pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.533338 sshd-session[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:23.537817 kernel: audit: type=1103 audit(1768353683.529:818): pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.537938 kernel: audit: type=1006 audit(1768353683.529:819): pid=6478 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 01:21:23.529000 audit[6478]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc2d3a610 a2=3 a3=0 items=0 ppid=1 pid=6478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.542662 kernel: audit: type=1300 audit(1768353683.529:819): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc2d3a610 a2=3 a3=0 items=0 ppid=1 pid=6478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.529000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:23.546662 kernel: audit: type=1327 audit(1768353683.529:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:23.548748 systemd-logind[2405]: New session 13 of user core. Jan 14 01:21:23.553820 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 01:21:23.557000 audit[6478]: USER_START pid=6478 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.564000 audit[6482]: CRED_ACQ pid=6482 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.567943 kernel: audit: type=1105 audit(1768353683.557:820): pid=6478 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.568068 kernel: audit: type=1103 audit(1768353683.564:821): pid=6482 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.915418 sshd[6482]: Connection closed by 10.200.16.10 port 37920 Jan 14 01:21:23.916123 sshd-session[6478]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:23.919000 audit[6478]: USER_END pid=6478 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.927659 kernel: audit: type=1106 audit(1768353683.919:822): pid=6478 
uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.930179 systemd[1]: sshd@9-10.200.4.14:22-10.200.16.10:37920.service: Deactivated successfully. Jan 14 01:21:23.931901 systemd-logind[2405]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:21:23.932811 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:21:23.919000 audit[6478]: CRED_DISP pid=6478 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.935296 systemd-logind[2405]: Removed session 13. Jan 14 01:21:23.940665 kernel: audit: type=1104 audit(1768353683.919:823): pid=6478 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:23.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.14:22-10.200.16.10:37920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:24.031415 systemd[1]: Started sshd@10-10.200.4.14:22-10.200.16.10:37924.service - OpenSSH per-connection server daemon (10.200.16.10:37924). Jan 14 01:21:24.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.14:22-10.200.16.10:37924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:24.592000 audit[6495]: USER_ACCT pid=6495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:24.593885 sshd[6495]: Accepted publickey for core from 10.200.16.10 port 37924 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:24.594000 audit[6495]: CRED_ACQ pid=6495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:24.594000 audit[6495]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd21e12510 a2=3 a3=0 items=0 ppid=1 pid=6495 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:24.594000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:24.595522 sshd-session[6495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:24.600251 systemd-logind[2405]: New session 14 of user core. Jan 14 01:21:24.605829 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 01:21:24.608000 audit[6495]: USER_START pid=6495 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:24.609000 audit[6499]: CRED_ACQ pid=6499 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:24.981739 sshd[6499]: Connection closed by 10.200.16.10 port 37924 Jan 14 01:21:24.982278 sshd-session[6495]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:24.984000 audit[6495]: USER_END pid=6495 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:24.984000 audit[6495]: CRED_DISP pid=6495 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:24.987420 systemd[1]: sshd@10-10.200.4.14:22-10.200.16.10:37924.service: Deactivated successfully. Jan 14 01:21:24.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.14:22-10.200.16.10:37924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:24.990360 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 14 01:21:24.992793 systemd-logind[2405]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:21:24.994525 systemd-logind[2405]: Removed session 14. Jan 14 01:21:25.051237 kubelet[3936]: E0114 01:21:25.051112 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:21:25.053781 kubelet[3936]: E0114 01:21:25.053744 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:21:25.053972 kubelet[3936]: E0114 01:21:25.053586 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:21:25.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.14:22-10.200.16.10:37928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:25.099719 systemd[1]: Started sshd@11-10.200.4.14:22-10.200.16.10:37928.service - OpenSSH per-connection server daemon (10.200.16.10:37928). 
Jan 14 01:21:25.664000 audit[6509]: USER_ACCT pid=6509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:25.666497 sshd[6509]: Accepted publickey for core from 10.200.16.10 port 37928 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:25.665000 audit[6509]: CRED_ACQ pid=6509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:25.665000 audit[6509]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8507e860 a2=3 a3=0 items=0 ppid=1 pid=6509 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:25.665000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:25.668251 sshd-session[6509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:25.677903 systemd-logind[2405]: New session 15 of user core. Jan 14 01:21:25.681846 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 01:21:25.683000 audit[6509]: USER_START pid=6509 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:25.685000 audit[6513]: CRED_ACQ pid=6513 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:26.018570 sshd[6513]: Connection closed by 10.200.16.10 port 37928 Jan 14 01:21:26.019791 sshd-session[6509]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:26.019000 audit[6509]: USER_END pid=6509 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:26.019000 audit[6509]: CRED_DISP pid=6509 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:26.023535 systemd[1]: sshd@11-10.200.4.14:22-10.200.16.10:37928.service: Deactivated successfully. Jan 14 01:21:26.023711 systemd-logind[2405]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:21:26.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.14:22-10.200.16.10:37928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:26.025545 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:21:26.027337 systemd-logind[2405]: Removed session 15. Jan 14 01:21:26.049701 kubelet[3936]: E0114 01:21:26.049657 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:21:31.153029 systemd[1]: Started sshd@12-10.200.4.14:22-10.200.16.10:56238.service - OpenSSH per-connection server daemon (10.200.16.10:56238). Jan 14 01:21:31.160362 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:21:31.160436 kernel: audit: type=1130 audit(1768353691.151:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.14:22-10.200.16.10:56238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:31.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.14:22-10.200.16.10:56238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:31.698000 audit[6532]: USER_ACCT pid=6532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:31.705607 sshd[6532]: Accepted publickey for core from 10.200.16.10 port 56238 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:31.705912 kernel: audit: type=1101 audit(1768353691.698:844): pid=6532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:31.705823 sshd-session[6532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:31.702000 audit[6532]: CRED_ACQ pid=6532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:31.716666 kernel: audit: type=1103 audit(1768353691.702:845): pid=6532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:31.722655 kernel: audit: type=1006 audit(1768353691.702:846): pid=6532 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:21:31.702000 audit[6532]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6d14af30 a2=3 a3=0 items=0 ppid=1 pid=6532 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:31.723616 systemd-logind[2405]: New session 16 of user core. Jan 14 01:21:31.729654 kernel: audit: type=1300 audit(1768353691.702:846): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6d14af30 a2=3 a3=0 items=0 ppid=1 pid=6532 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:31.702000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:31.730835 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 01:21:31.733808 kernel: audit: type=1327 audit(1768353691.702:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:31.736000 audit[6532]: USER_START pid=6532 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:31.745427 kernel: audit: type=1105 audit(1768353691.736:847): pid=6532 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:31.745000 audit[6536]: CRED_ACQ pid=6536 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:31.754691 kernel: audit: type=1103 audit(1768353691.745:848): pid=6536 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:32.078909 sshd[6536]: Connection closed by 10.200.16.10 port 56238 Jan 14 01:21:32.080814 sshd-session[6532]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:32.080000 audit[6532]: USER_END pid=6532 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:32.087817 systemd-logind[2405]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:21:32.089662 kernel: audit: type=1106 audit(1768353692.080:849): pid=6532 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:32.088541 systemd[1]: sshd@12-10.200.4.14:22-10.200.16.10:56238.service: Deactivated successfully. Jan 14 01:21:32.092274 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:21:32.080000 audit[6532]: CRED_DISP pid=6532 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:32.098397 systemd-logind[2405]: Removed session 16. 
Jan 14 01:21:32.100650 kernel: audit: type=1104 audit(1768353692.080:850): pid=6532 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:32.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.14:22-10.200.16.10:56238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:33.049721 kubelet[3936]: E0114 01:21:33.049676 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:21:36.050238 kubelet[3936]: E0114 01:21:36.050183 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:21:37.197847 systemd[1]: Started sshd@13-10.200.4.14:22-10.200.16.10:56252.service - OpenSSH per-connection server daemon (10.200.16.10:56252). 
Jan 14 01:21:37.206691 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:21:37.206779 kernel: audit: type=1130 audit(1768353697.197:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.14:22-10.200.16.10:56252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:37.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.14:22-10.200.16.10:56252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:37.744000 audit[6549]: USER_ACCT pid=6549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:37.748744 sshd-session[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:37.750098 sshd[6549]: Accepted publickey for core from 10.200.16.10 port 56252 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:37.744000 audit[6549]: CRED_ACQ pid=6549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:37.757474 kernel: audit: type=1101 audit(1768353697.744:853): pid=6549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:37.757546 kernel: audit: type=1103 audit(1768353697.744:854): pid=6549 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:37.763664 kernel: audit: type=1006 audit(1768353697.744:855): pid=6549 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:21:37.761579 systemd-logind[2405]: New session 17 of user core. Jan 14 01:21:37.744000 audit[6549]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd2a8770 a2=3 a3=0 items=0 ppid=1 pid=6549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:37.769011 kernel: audit: type=1300 audit(1768353697.744:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd2a8770 a2=3 a3=0 items=0 ppid=1 pid=6549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:37.744000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:37.772019 kernel: audit: type=1327 audit(1768353697.744:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:37.773836 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:21:37.775000 audit[6549]: USER_START pid=6549 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:37.782693 kernel: audit: type=1105 audit(1768353697.775:856): pid=6549 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:37.779000 audit[6553]: CRED_ACQ pid=6553 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:37.788666 kernel: audit: type=1103 audit(1768353697.779:857): pid=6553 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:38.050331 kubelet[3936]: E0114 01:21:38.050230 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:21:38.050734 kubelet[3936]: E0114 
01:21:38.050624 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:21:38.146845 sshd[6553]: Connection closed by 10.200.16.10 port 56252 Jan 14 01:21:38.148829 sshd-session[6549]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:38.149000 audit[6549]: USER_END pid=6549 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:38.160656 kernel: audit: type=1106 audit(1768353698.149:858): pid=6549 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:38.159983 systemd[1]: sshd@13-10.200.4.14:22-10.200.16.10:56252.service: Deactivated successfully. 
Jan 14 01:21:38.149000 audit[6549]: CRED_DISP pid=6549 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:38.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.14:22-10.200.16.10:56252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:38.166155 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:21:38.166689 kernel: audit: type=1104 audit(1768353698.149:859): pid=6549 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:38.168894 systemd-logind[2405]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:21:38.170363 systemd-logind[2405]: Removed session 17. 
Jan 14 01:21:39.049865 kubelet[3936]: E0114 01:21:39.049794 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:21:40.049710 kubelet[3936]: E0114 01:21:40.049571 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:21:40.050981 containerd[2417]: time="2026-01-14T01:21:40.050937974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:21:40.319948 containerd[2417]: time="2026-01-14T01:21:40.319660515Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:40.326092 containerd[2417]: time="2026-01-14T01:21:40.326034511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:21:40.326092 containerd[2417]: time="2026-01-14T01:21:40.326064331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:40.326281 kubelet[3936]: E0114 01:21:40.326245 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:21:40.326347 kubelet[3936]: E0114 01:21:40.326295 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:21:40.326588 kubelet[3936]: E0114 01:21:40.326423 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxvs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:21:40.328676 containerd[2417]: time="2026-01-14T01:21:40.328650029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:21:40.590161 containerd[2417]: time="2026-01-14T01:21:40.590039624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:40.593396 containerd[2417]: time="2026-01-14T01:21:40.593352782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:21:40.593505 containerd[2417]: time="2026-01-14T01:21:40.593452267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:40.595159 kubelet[3936]: E0114 01:21:40.595114 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:21:40.595261 kubelet[3936]: E0114 01:21:40.595172 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:21:40.595333 kubelet[3936]: E0114 01:21:40.595296 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxvs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bg7tj_calico-system(0ad549b6-0df1-4bac-8f3a-1bc2943edac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:40.596730 kubelet[3936]: E0114 01:21:40.596692 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:21:43.272679 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:21:43.272773 kernel: audit: type=1130 audit(1768353703.264:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.14:22-10.200.16.10:35814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:43.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.14:22-10.200.16.10:35814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:43.265261 systemd[1]: Started sshd@14-10.200.4.14:22-10.200.16.10:35814.service - OpenSSH per-connection server daemon (10.200.16.10:35814). 
Jan 14 01:21:43.823000 audit[6573]: USER_ACCT pid=6573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:43.824391 sshd[6573]: Accepted publickey for core from 10.200.16.10 port 35814 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:43.826668 sshd-session[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:43.832662 kernel: audit: type=1101 audit(1768353703.823:862): pid=6573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:43.825000 audit[6573]: CRED_ACQ pid=6573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:43.834584 systemd-logind[2405]: New session 18 of user core. 
Jan 14 01:21:43.845520 kernel: audit: type=1103 audit(1768353703.825:863): pid=6573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:43.845593 kernel: audit: type=1006 audit(1768353703.825:864): pid=6573 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 01:21:43.825000 audit[6573]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3fd12920 a2=3 a3=0 items=0 ppid=1 pid=6573 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:43.851114 kernel: audit: type=1300 audit(1768353703.825:864): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3fd12920 a2=3 a3=0 items=0 ppid=1 pid=6573 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:43.851833 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 01:21:43.825000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:43.855173 kernel: audit: type=1327 audit(1768353703.825:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:43.859963 kernel: audit: type=1105 audit(1768353703.855:865): pid=6573 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:43.855000 audit[6573]: USER_START pid=6573 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:43.859000 audit[6577]: CRED_ACQ pid=6577 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:43.865020 kernel: audit: type=1103 audit(1768353703.859:866): pid=6577 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:44.187307 sshd[6577]: Connection closed by 10.200.16.10 port 35814 Jan 14 01:21:44.187946 sshd-session[6573]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:44.190000 audit[6573]: USER_END pid=6573 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:44.197607 systemd[1]: sshd@14-10.200.4.14:22-10.200.16.10:35814.service: Deactivated successfully. Jan 14 01:21:44.190000 audit[6573]: CRED_DISP pid=6573 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:44.201325 kernel: audit: type=1106 audit(1768353704.190:867): pid=6573 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:44.201374 kernel: audit: type=1104 audit(1768353704.190:868): pid=6573 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:44.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.14:22-10.200.16.10:35814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:44.205115 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:21:44.206680 systemd-logind[2405]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:21:44.208532 systemd-logind[2405]: Removed session 18. 
Jan 14 01:21:44.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.14:22-10.200.16.10:35820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:44.303842 systemd[1]: Started sshd@15-10.200.4.14:22-10.200.16.10:35820.service - OpenSSH per-connection server daemon (10.200.16.10:35820). Jan 14 01:21:44.859000 audit[6588]: USER_ACCT pid=6588 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:44.859988 sshd[6588]: Accepted publickey for core from 10.200.16.10 port 35820 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:44.860000 audit[6588]: CRED_ACQ pid=6588 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:44.860000 audit[6588]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc913da280 a2=3 a3=0 items=0 ppid=1 pid=6588 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:44.860000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:44.862172 sshd-session[6588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:44.867738 systemd-logind[2405]: New session 19 of user core. Jan 14 01:21:44.875745 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 01:21:44.878000 audit[6588]: USER_START pid=6588 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:44.879000 audit[6592]: CRED_ACQ pid=6592 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:45.050945 kubelet[3936]: E0114 01:21:45.050017 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:21:45.235278 sshd[6592]: Connection closed by 10.200.16.10 port 35820 Jan 14 01:21:45.234657 sshd-session[6588]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:45.236000 audit[6588]: USER_END pid=6588 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:45.236000 audit[6588]: CRED_DISP pid=6588 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:45.240087 systemd[1]: sshd@15-10.200.4.14:22-10.200.16.10:35820.service: Deactivated successfully. Jan 14 01:21:45.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.14:22-10.200.16.10:35820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:45.241933 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:21:45.242950 systemd-logind[2405]: Session 19 logged out. Waiting for processes to exit. Jan 14 01:21:45.244258 systemd-logind[2405]: Removed session 19. Jan 14 01:21:45.346993 systemd[1]: Started sshd@16-10.200.4.14:22-10.200.16.10:35830.service - OpenSSH per-connection server daemon (10.200.16.10:35830). Jan 14 01:21:45.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.14:22-10.200.16.10:35830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:45.884000 audit[6603]: USER_ACCT pid=6603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:45.886145 sshd[6603]: Accepted publickey for core from 10.200.16.10 port 35830 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:45.886000 audit[6603]: CRED_ACQ pid=6603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:45.886000 audit[6603]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeacf58070 a2=3 a3=0 items=0 ppid=1 pid=6603 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:45.886000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:45.887631 sshd-session[6603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:45.892663 systemd-logind[2405]: New session 20 of user core. Jan 14 01:21:45.898946 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 01:21:45.901000 audit[6603]: USER_START pid=6603 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:45.903000 audit[6607]: CRED_ACQ pid=6607 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:46.797000 audit[6639]: NETFILTER_CFG table=filter:155 family=2 entries=26 op=nft_register_rule pid=6639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:46.797000 audit[6639]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffebc86c720 a2=0 a3=7ffebc86c70c items=0 ppid=4040 pid=6639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:46.797000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:46.806000 audit[6639]: NETFILTER_CFG table=nat:156 family=2 entries=20 op=nft_register_rule pid=6639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:46.806000 audit[6639]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffebc86c720 a2=0 a3=0 items=0 ppid=4040 pid=6639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:46.806000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:46.835000 audit[6642]: NETFILTER_CFG table=filter:157 family=2 entries=38 op=nft_register_rule pid=6642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:46.835000 audit[6642]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc18d463e0 a2=0 a3=7ffc18d463cc items=0 ppid=4040 pid=6642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:46.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:46.848000 audit[6642]: NETFILTER_CFG table=nat:158 family=2 entries=20 op=nft_register_rule pid=6642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:46.848000 audit[6642]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc18d463e0 a2=0 a3=0 items=0 ppid=4040 pid=6642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:46.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:46.889654 sshd[6607]: Connection closed by 10.200.16.10 port 35830 Jan 14 01:21:46.890212 sshd-session[6603]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:46.891000 audit[6603]: USER_END pid=6603 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:46.891000 audit[6603]: CRED_DISP pid=6603 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:46.895693 systemd-logind[2405]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:21:46.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.14:22-10.200.16.10:35830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:46.896057 systemd[1]: sshd@16-10.200.4.14:22-10.200.16.10:35830.service: Deactivated successfully. Jan 14 01:21:46.899342 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:21:46.903738 systemd-logind[2405]: Removed session 20. Jan 14 01:21:46.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.14:22-10.200.16.10:35836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:46.998675 systemd[1]: Started sshd@17-10.200.4.14:22-10.200.16.10:35836.service - OpenSSH per-connection server daemon (10.200.16.10:35836). 
Jan 14 01:21:47.536407 sshd[6647]: Accepted publickey for core from 10.200.16.10 port 35836 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:47.535000 audit[6647]: USER_ACCT pid=6647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:47.539773 sshd-session[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:47.538000 audit[6647]: CRED_ACQ pid=6647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:47.538000 audit[6647]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2ca4de70 a2=3 a3=0 items=0 ppid=1 pid=6647 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:47.538000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:47.548705 systemd-logind[2405]: New session 21 of user core. Jan 14 01:21:47.552830 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 01:21:47.556000 audit[6647]: USER_START pid=6647 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:47.559000 audit[6651]: CRED_ACQ pid=6651 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.093665 sshd[6651]: Connection closed by 10.200.16.10 port 35836 Jan 14 01:21:48.095816 sshd-session[6647]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:48.096000 audit[6647]: USER_END pid=6647 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.096000 audit[6647]: CRED_DISP pid=6647 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.102943 systemd[1]: sshd@17-10.200.4.14:22-10.200.16.10:35836.service: Deactivated successfully. Jan 14 01:21:48.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.14:22-10.200.16.10:35836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:48.107569 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 14 01:21:48.110589 systemd-logind[2405]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:21:48.113339 systemd-logind[2405]: Removed session 21. Jan 14 01:21:48.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.14:22-10.200.16.10:35850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:48.206906 systemd[1]: Started sshd@18-10.200.4.14:22-10.200.16.10:35850.service - OpenSSH per-connection server daemon (10.200.16.10:35850). Jan 14 01:21:48.752000 audit[6661]: USER_ACCT pid=6661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.760616 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 14 01:21:48.760728 kernel: audit: type=1101 audit(1768353708.752:902): pid=6661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.759571 sshd-session[6661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:48.760986 sshd[6661]: Accepted publickey for core from 10.200.16.10 port 35850 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:48.756000 audit[6661]: CRED_ACQ pid=6661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.771048 kernel: audit: type=1103 audit(1768353708.756:903): pid=6661 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.771127 kernel: audit: type=1006 audit(1768353708.756:904): pid=6661 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:21:48.770629 systemd-logind[2405]: New session 22 of user core. Jan 14 01:21:48.756000 audit[6661]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1df6e330 a2=3 a3=0 items=0 ppid=1 pid=6661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:48.772974 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 01:21:48.778652 kernel: audit: type=1300 audit(1768353708.756:904): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1df6e330 a2=3 a3=0 items=0 ppid=1 pid=6661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:48.756000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:48.777000 audit[6661]: USER_START pid=6661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.786095 kernel: audit: type=1327 audit(1768353708.756:904): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:48.786135 kernel: audit: type=1105 audit(1768353708.777:905): pid=6661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.779000 audit[6665]: CRED_ACQ pid=6665 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:48.793705 kernel: audit: type=1103 audit(1768353708.779:906): pid=6665 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:49.051854 containerd[2417]: time="2026-01-14T01:21:49.051716791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:21:49.109661 sshd[6665]: Connection closed by 10.200.16.10 port 35850 Jan 14 01:21:49.110130 sshd-session[6661]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:49.109000 audit[6661]: USER_END pid=6661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:49.114131 systemd[1]: sshd@18-10.200.4.14:22-10.200.16.10:35850.service: Deactivated successfully. Jan 14 01:21:49.117050 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:21:49.119380 systemd-logind[2405]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:21:49.120519 systemd-logind[2405]: Removed session 22. 
Jan 14 01:21:49.126276 kernel: audit: type=1106 audit(1768353709.109:907): pid=6661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:49.126368 kernel: audit: type=1104 audit(1768353709.109:908): pid=6661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:49.109000 audit[6661]: CRED_DISP pid=6661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:49.129671 kernel: audit: type=1131 audit(1768353709.109:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.14:22-10.200.16.10:35850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:49.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.14:22-10.200.16.10:35850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:49.321958 containerd[2417]: time="2026-01-14T01:21:49.321756860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:49.325673 containerd[2417]: time="2026-01-14T01:21:49.325540642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:21:49.325673 containerd[2417]: time="2026-01-14T01:21:49.325644828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:49.325990 kubelet[3936]: E0114 01:21:49.325910 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:49.325990 kubelet[3936]: E0114 01:21:49.325974 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:49.328108 kubelet[3936]: E0114 01:21:49.328060 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-689q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8778f546-mp9tk_calico-apiserver(f6926f72-9c01-4e67-abef-2eb546c46570): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:49.329447 kubelet[3936]: E0114 01:21:49.329406 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:21:50.049207 kubelet[3936]: E0114 01:21:50.049124 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:21:51.050811 kubelet[3936]: E0114 01:21:51.050774 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:21:51.051718 kubelet[3936]: E0114 01:21:51.051675 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:21:53.051827 containerd[2417]: time="2026-01-14T01:21:53.051775798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:21:53.320022 containerd[2417]: time="2026-01-14T01:21:53.319895114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:53.325859 containerd[2417]: time="2026-01-14T01:21:53.325825431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:21:53.325978 containerd[2417]: time="2026-01-14T01:21:53.325906445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:53.326096 kubelet[3936]: E0114 01:21:53.326043 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:53.326410 
kubelet[3936]: E0114 01:21:53.326110 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:53.326410 kubelet[3936]: E0114 01:21:53.326247 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:773bdd7ef7d541e28728652281bd8ea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgkfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-69445f69fb-5cr2q_calico-system(ed6d8542-dfd5-4ecd-928d-cf86db3537f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:53.328911 containerd[2417]: time="2026-01-14T01:21:53.328882518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:21:53.598932 containerd[2417]: time="2026-01-14T01:21:53.598572991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:53.602831 containerd[2417]: time="2026-01-14T01:21:53.602762022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:21:53.602831 containerd[2417]: time="2026-01-14T01:21:53.602812767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:53.603001 kubelet[3936]: E0114 01:21:53.602968 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:53.603067 kubelet[3936]: E0114 01:21:53.603013 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:53.603174 kubelet[3936]: E0114 01:21:53.603131 3936 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgkfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69445f69fb-5cr2q_calico-system(ed6d8542-dfd5-4ecd-928d-cf86db3537f3): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:53.604659 kubelet[3936]: E0114 01:21:53.604596 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:21:54.051386 containerd[2417]: time="2026-01-14T01:21:54.051134855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:21:54.238726 kernel: audit: type=1130 audit(1768353714.227:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.14:22-10.200.16.10:34486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:54.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.14:22-10.200.16.10:34486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:54.228920 systemd[1]: Started sshd@19-10.200.4.14:22-10.200.16.10:34486.service - OpenSSH per-connection server daemon (10.200.16.10:34486). 
Jan 14 01:21:54.329465 containerd[2417]: time="2026-01-14T01:21:54.329359701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:54.333179 containerd[2417]: time="2026-01-14T01:21:54.333139018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:21:54.333276 containerd[2417]: time="2026-01-14T01:21:54.333242549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:54.333512 kubelet[3936]: E0114 01:21:54.333473 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:54.334249 kubelet[3936]: E0114 01:21:54.333804 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:54.334417 kubelet[3936]: E0114 01:21:54.334382 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mfgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8778f546-8gqr9_calico-apiserver(73f6bb79-7f15-4fdc-bde4-bdf058188aed): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:54.335818 kubelet[3936]: E0114 01:21:54.335786 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:21:54.802000 audit[6686]: USER_ACCT pid=6686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:54.805949 sshd-session[6686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:21:54.807052 sshd[6686]: Accepted publickey for core from 10.200.16.10 port 34486 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:21:54.803000 audit[6686]: CRED_ACQ pid=6686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:54.815163 kernel: audit: type=1101 audit(1768353714.802:911): pid=6686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:54.815226 kernel: audit: type=1103 audit(1768353714.803:912): pid=6686 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:54.817812 systemd-logind[2405]: New session 23 of user core. Jan 14 01:21:54.819945 kernel: audit: type=1006 audit(1768353714.803:913): pid=6686 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 01:21:54.803000 audit[6686]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdce379750 a2=3 a3=0 items=0 ppid=1 pid=6686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:54.825339 kernel: audit: type=1300 audit(1768353714.803:913): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdce379750 a2=3 a3=0 items=0 ppid=1 pid=6686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:54.825842 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 14 01:21:54.803000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:54.829497 kernel: audit: type=1327 audit(1768353714.803:913): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:21:54.831237 kernel: audit: type=1105 audit(1768353714.829:914): pid=6686 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:54.829000 audit[6686]: USER_START pid=6686 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:54.834000 audit[6705]: CRED_ACQ pid=6705 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:54.846651 kernel: audit: type=1103 audit(1768353714.834:915): pid=6705 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:55.162127 sshd[6705]: Connection closed by 10.200.16.10 port 34486 Jan 14 01:21:55.162744 sshd-session[6686]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:55.163000 audit[6686]: USER_END pid=6686 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:55.168306 systemd[1]: sshd@19-10.200.4.14:22-10.200.16.10:34486.service: Deactivated successfully. Jan 14 01:21:55.170803 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 01:21:55.175087 kernel: audit: type=1106 audit(1768353715.163:916): pid=6686 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:55.174523 systemd-logind[2405]: Session 23 logged out. Waiting for processes to exit. Jan 14 01:21:55.163000 audit[6686]: CRED_DISP pid=6686 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:55.175946 systemd-logind[2405]: Removed session 23. Jan 14 01:21:55.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.14:22-10.200.16.10:34486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:55.181832 kernel: audit: type=1104 audit(1768353715.163:917): pid=6686 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:21:56.049298 containerd[2417]: time="2026-01-14T01:21:56.049252677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:21:56.308444 containerd[2417]: time="2026-01-14T01:21:56.308194399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:56.311506 containerd[2417]: time="2026-01-14T01:21:56.311351377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:21:56.311506 containerd[2417]: time="2026-01-14T01:21:56.311454934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:56.312224 kubelet[3936]: E0114 01:21:56.311787 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:21:56.312224 kubelet[3936]: E0114 01:21:56.311838 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:21:56.312224 kubelet[3936]: E0114 
01:21:56.311980 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnsz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f77f5cb44-nf9jt_calico-system(1876e14b-df10-499c-9b9b-1ece31d0136a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:56.313559 kubelet[3936]: E0114 01:21:56.313505 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:22:00.289344 systemd[1]: Started sshd@20-10.200.4.14:22-10.200.16.10:60994.service - OpenSSH per-connection server daemon (10.200.16.10:60994). 
Jan 14 01:22:00.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.14:22-10.200.16.10:60994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:00.292104 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:00.292194 kernel: audit: type=1130 audit(1768353720.288:919): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.14:22-10.200.16.10:60994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:00.842000 audit[6717]: USER_ACCT pid=6717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:00.849821 kernel: audit: type=1101 audit(1768353720.842:920): pid=6717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:00.849979 sshd[6717]: Accepted publickey for core from 10.200.16.10 port 60994 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:00.851513 sshd-session[6717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:00.849000 audit[6717]: CRED_ACQ pid=6717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:00.859676 kernel: audit: type=1103 audit(1768353720.849:921): pid=6717 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:00.861849 systemd-logind[2405]: New session 24 of user core. Jan 14 01:22:00.867657 kernel: audit: type=1006 audit(1768353720.849:922): pid=6717 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 01:22:00.849000 audit[6717]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3ea336a0 a2=3 a3=0 items=0 ppid=1 pid=6717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:00.875667 kernel: audit: type=1300 audit(1768353720.849:922): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3ea336a0 a2=3 a3=0 items=0 ppid=1 pid=6717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:00.868843 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 01:22:00.849000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:00.879691 kernel: audit: type=1327 audit(1768353720.849:922): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:00.880000 audit[6717]: USER_START pid=6717 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:00.886000 audit[6721]: CRED_ACQ pid=6721 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:00.896427 kernel: audit: type=1105 audit(1768353720.880:923): pid=6717 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:00.896513 kernel: audit: type=1103 audit(1768353720.886:924): pid=6721 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:01.054790 containerd[2417]: time="2026-01-14T01:22:01.054753472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:01.245151 sshd[6721]: Connection closed by 10.200.16.10 port 60994 Jan 14 01:22:01.246092 sshd-session[6717]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:01.246000 audit[6717]: USER_END pid=6717 uid=0 auid=500 
ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:01.251310 systemd[1]: sshd@20-10.200.4.14:22-10.200.16.10:60994.service: Deactivated successfully. Jan 14 01:22:01.253992 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:22:01.255268 systemd-logind[2405]: Session 24 logged out. Waiting for processes to exit. Jan 14 01:22:01.246000 audit[6717]: CRED_DISP pid=6717 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:01.257398 systemd-logind[2405]: Removed session 24. Jan 14 01:22:01.261067 kernel: audit: type=1106 audit(1768353721.246:925): pid=6717 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:01.261147 kernel: audit: type=1104 audit(1768353721.246:926): pid=6717 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:01.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.14:22-10.200.16.10:60994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:01.333630 containerd[2417]: time="2026-01-14T01:22:01.333585892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:01.338011 containerd[2417]: time="2026-01-14T01:22:01.337965204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:22:01.338098 containerd[2417]: time="2026-01-14T01:22:01.338068944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:01.338285 kubelet[3936]: E0114 01:22:01.338233 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:01.338786 kubelet[3936]: E0114 01:22:01.338552 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:01.338786 kubelet[3936]: E0114 01:22:01.338733 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7758cb5d69-rt2pc_calico-apiserver(b4f27767-b32c-43ae-95eb-f1d5e5f34f59): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:01.340159 kubelet[3936]: E0114 01:22:01.340116 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:22:02.051271 kubelet[3936]: E0114 01:22:02.051174 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:22:04.050740 containerd[2417]: time="2026-01-14T01:22:04.050689387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:22:04.341533 containerd[2417]: time="2026-01-14T01:22:04.341267197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:04.346866 containerd[2417]: 
time="2026-01-14T01:22:04.346815415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:22:04.346866 containerd[2417]: time="2026-01-14T01:22:04.346842065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:04.347060 kubelet[3936]: E0114 01:22:04.347021 3936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:04.347336 kubelet[3936]: E0114 01:22:04.347069 3936 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:04.347627 kubelet[3936]: E0114 01:22:04.347546 3936 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hh2t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dg8l8_calico-system(f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:04.348750 kubelet[3936]: E0114 01:22:04.348718 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:22:05.050835 kubelet[3936]: E0114 01:22:05.050570 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:22:06.049605 kubelet[3936]: E0114 01:22:06.049565 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:22:06.364137 systemd[1]: Started sshd@21-10.200.4.14:22-10.200.16.10:32770.service - OpenSSH per-connection server daemon (10.200.16.10:32770). Jan 14 01:22:06.371447 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:06.371536 kernel: audit: type=1130 audit(1768353726.363:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.14:22-10.200.16.10:32770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:06.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.14:22-10.200.16.10:32770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:06.910000 audit[6736]: USER_ACCT pid=6736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:06.916812 sshd[6736]: Accepted publickey for core from 10.200.16.10 port 32770 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:06.916686 sshd-session[6736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:06.917969 kernel: audit: type=1101 audit(1768353726.910:929): pid=6736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:06.923990 systemd-logind[2405]: New session 25 of user core. Jan 14 01:22:06.914000 audit[6736]: CRED_ACQ pid=6736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:06.935513 kernel: audit: type=1103 audit(1768353726.914:930): pid=6736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:06.935586 kernel: audit: type=1006 audit(1768353726.914:931): pid=6736 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 01:22:06.935784 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 01:22:06.943727 kernel: audit: type=1300 audit(1768353726.914:931): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffddaf01c0 a2=3 a3=0 items=0 ppid=1 pid=6736 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:06.914000 audit[6736]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffddaf01c0 a2=3 a3=0 items=0 ppid=1 pid=6736 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:06.914000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:06.952418 kernel: audit: type=1327 audit(1768353726.914:931): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:06.952491 kernel: audit: type=1105 audit(1768353726.943:932): pid=6736 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:06.943000 audit[6736]: USER_START pid=6736 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:06.959483 kernel: audit: type=1103 audit(1768353726.951:933): pid=6740 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh 
res=success' Jan 14 01:22:06.951000 audit[6740]: CRED_ACQ pid=6740 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:07.338045 sshd[6740]: Connection closed by 10.200.16.10 port 32770 Jan 14 01:22:07.340422 sshd-session[6736]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:07.342000 audit[6736]: USER_END pid=6736 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:07.348274 systemd[1]: sshd@21-10.200.4.14:22-10.200.16.10:32770.service: Deactivated successfully. Jan 14 01:22:07.349675 kernel: audit: type=1106 audit(1768353727.342:934): pid=6736 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:07.342000 audit[6736]: CRED_DISP pid=6736 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:07.353970 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 01:22:07.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.14:22-10.200.16.10:32770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:07.355755 kernel: audit: type=1104 audit(1768353727.342:935): pid=6736 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:07.356524 systemd-logind[2405]: Session 25 logged out. Waiting for processes to exit. Jan 14 01:22:07.357895 systemd-logind[2405]: Removed session 25. Jan 14 01:22:08.049058 kubelet[3936]: E0114 01:22:08.049019 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:22:09.052013 kubelet[3936]: E0114 01:22:09.051191 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" 
podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:22:12.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.14:22-10.200.16.10:42656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:12.464952 systemd[1]: Started sshd@22-10.200.4.14:22-10.200.16.10:42656.service - OpenSSH per-connection server daemon (10.200.16.10:42656). Jan 14 01:22:12.466119 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:12.466151 kernel: audit: type=1130 audit(1768353732.463:937): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.14:22-10.200.16.10:42656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:13.015000 audit[6752]: USER_ACCT pid=6752 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.017276 sshd[6752]: Accepted publickey for core from 10.200.16.10 port 42656 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:13.021623 sshd-session[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:13.019000 audit[6752]: CRED_ACQ pid=6752 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.025322 kernel: audit: type=1101 audit(1768353733.015:938): pid=6752 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.025478 kernel: audit: type=1103 audit(1768353733.019:939): pid=6752 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.029251 kernel: audit: type=1006 audit(1768353733.019:940): pid=6752 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 01:22:13.019000 audit[6752]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec73007e0 a2=3 a3=0 items=0 ppid=1 pid=6752 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:13.033548 kernel: audit: type=1300 audit(1768353733.019:940): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec73007e0 a2=3 a3=0 items=0 ppid=1 pid=6752 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:13.034739 systemd-logind[2405]: New session 26 of user core. Jan 14 01:22:13.019000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:13.041653 kernel: audit: type=1327 audit(1768353733.019:940): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:13.042746 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 01:22:13.044000 audit[6752]: USER_START pid=6752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.054766 kernel: audit: type=1105 audit(1768353733.044:941): pid=6752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.054818 kernel: audit: type=1103 audit(1768353733.044:942): pid=6756 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.044000 audit[6756]: CRED_ACQ pid=6756 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.398442 sshd[6756]: Connection closed by 10.200.16.10 port 42656 Jan 14 01:22:13.400374 sshd-session[6752]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:13.400000 audit[6752]: USER_END pid=6752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.408662 kernel: audit: type=1106 audit(1768353733.400:943): pid=6752 
uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.407847 systemd[1]: sshd@22-10.200.4.14:22-10.200.16.10:42656.service: Deactivated successfully. Jan 14 01:22:13.411194 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 01:22:13.400000 audit[6752]: CRED_DISP pid=6752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.414030 systemd-logind[2405]: Session 26 logged out. Waiting for processes to exit. Jan 14 01:22:13.417665 kernel: audit: type=1104 audit(1768353733.400:944): pid=6752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:13.418138 systemd-logind[2405]: Removed session 26. Jan 14 01:22:13.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.14:22-10.200.16.10:42656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:14.050324 kubelet[3936]: E0114 01:22:14.050266 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:22:15.050366 kubelet[3936]: E0114 01:22:15.049790 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:22:17.051149 kubelet[3936]: E0114 01:22:17.051084 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:22:18.513320 systemd[1]: Started sshd@23-10.200.4.14:22-10.200.16.10:42660.service - OpenSSH per-connection server daemon (10.200.16.10:42660). Jan 14 01:22:18.517972 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:18.518038 kernel: audit: type=1130 audit(1768353738.512:946): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.14:22-10.200.16.10:42660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:18.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.14:22-10.200.16.10:42660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:18.597000 audit[6797]: NETFILTER_CFG table=filter:159 family=2 entries=26 op=nft_register_rule pid=6797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:18.604685 kernel: audit: type=1325 audit(1768353738.597:947): table=filter:159 family=2 entries=26 op=nft_register_rule pid=6797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:18.597000 audit[6797]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc92d25aa0 a2=0 a3=7ffc92d25a8c items=0 ppid=4040 pid=6797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:18.612747 kernel: audit: type=1300 audit(1768353738.597:947): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc92d25aa0 a2=0 a3=7ffc92d25a8c items=0 ppid=4040 pid=6797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:18.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:18.617667 kernel: audit: type=1327 audit(1768353738.597:947): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:18.615000 audit[6797]: NETFILTER_CFG table=nat:160 family=2 entries=104 op=nft_register_chain pid=6797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:18.627282 kernel: audit: type=1325 audit(1768353738.615:948): table=nat:160 family=2 entries=104 op=nft_register_chain pid=6797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:18.627346 kernel: audit: type=1300 audit(1768353738.615:948): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc92d25aa0 a2=0 a3=7ffc92d25a8c items=0 ppid=4040 pid=6797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:18.615000 audit[6797]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc92d25aa0 a2=0 a3=7ffc92d25a8c items=0 ppid=4040 pid=6797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:18.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:18.631651 kernel: audit: type=1327 audit(1768353738.615:948): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:19.050909 kubelet[3936]: E0114 01:22:19.049323 3936 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:22:19.067000 audit[6793]: USER_ACCT pid=6793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:19.069745 sshd-session[6793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:19.070549 sshd[6793]: Accepted publickey for core from 10.200.16.10 port 42660 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:19.082404 systemd-logind[2405]: New session 27 of user core. 
Jan 14 01:22:19.083694 kernel: audit: type=1101 audit(1768353739.067:949): pid=6793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:19.067000 audit[6793]: CRED_ACQ pid=6793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:19.091511 kernel: audit: type=1103 audit(1768353739.067:950): pid=6793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:19.090962 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 01:22:19.096736 kernel: audit: type=1006 audit(1768353739.067:951): pid=6793 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 01:22:19.067000 audit[6793]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9bb16220 a2=3 a3=0 items=0 ppid=1 pid=6793 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:19.067000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:19.093000 audit[6793]: USER_START pid=6793 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:19.096000 audit[6799]: CRED_ACQ pid=6799 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:19.414556 sshd[6799]: Connection closed by 10.200.16.10 port 42660 Jan 14 01:22:19.414906 sshd-session[6793]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:19.416000 audit[6793]: USER_END pid=6793 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:19.416000 audit[6793]: CRED_DISP pid=6793 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:19.419030 systemd-logind[2405]: Session 27 logged out. Waiting for processes to exit. Jan 14 01:22:19.419267 systemd[1]: sshd@23-10.200.4.14:22-10.200.16.10:42660.service: Deactivated successfully. Jan 14 01:22:19.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.14:22-10.200.16.10:42660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:19.421247 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 01:22:19.423547 systemd-logind[2405]: Removed session 27. Jan 14 01:22:20.051538 kubelet[3936]: E0114 01:22:20.051497 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:22:21.050257 kubelet[3936]: E0114 01:22:21.050223 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:22:22.051512 kubelet[3936]: E0114 01:22:22.050862 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:22:24.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.14:22-10.200.16.10:34142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:24.527774 systemd[1]: Started sshd@24-10.200.4.14:22-10.200.16.10:34142.service - OpenSSH per-connection server daemon (10.200.16.10:34142). Jan 14 01:22:24.529057 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:22:24.529099 kernel: audit: type=1130 audit(1768353744.527:957): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.14:22-10.200.16.10:34142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:25.081000 audit[6812]: USER_ACCT pid=6812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.084250 sshd-session[6812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:25.085797 sshd[6812]: Accepted publickey for core from 10.200.16.10 port 34142 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:25.089656 kernel: audit: type=1101 audit(1768353745.081:958): pid=6812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.082000 audit[6812]: CRED_ACQ pid=6812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.094136 systemd-logind[2405]: New session 28 of user core. 
Jan 14 01:22:25.099225 kernel: audit: type=1103 audit(1768353745.082:959): pid=6812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.099298 kernel: audit: type=1006 audit(1768353745.082:960): pid=6812 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 14 01:22:25.082000 audit[6812]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4758f3a0 a2=3 a3=0 items=0 ppid=1 pid=6812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:25.104139 kernel: audit: type=1300 audit(1768353745.082:960): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4758f3a0 a2=3 a3=0 items=0 ppid=1 pid=6812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:25.104836 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 14 01:22:25.082000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:25.108378 kernel: audit: type=1327 audit(1768353745.082:960): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:25.108000 audit[6812]: USER_START pid=6812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.115039 kernel: audit: type=1105 audit(1768353745.108:961): pid=6812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.110000 audit[6816]: CRED_ACQ pid=6816 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.120751 kernel: audit: type=1103 audit(1768353745.110:962): pid=6816 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.435488 sshd[6816]: Connection closed by 10.200.16.10 port 34142 Jan 14 01:22:25.437085 sshd-session[6812]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:25.437000 audit[6812]: USER_END pid=6812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.440889 systemd[1]: sshd@24-10.200.4.14:22-10.200.16.10:34142.service: Deactivated successfully. Jan 14 01:22:25.443420 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 01:22:25.445915 systemd-logind[2405]: Session 28 logged out. Waiting for processes to exit. Jan 14 01:22:25.447046 systemd-logind[2405]: Removed session 28. Jan 14 01:22:25.437000 audit[6812]: CRED_DISP pid=6812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.451212 kernel: audit: type=1106 audit(1768353745.437:963): pid=6812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.451270 kernel: audit: type=1104 audit(1768353745.437:964): pid=6812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:25.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.14:22-10.200.16.10:34142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:26.049942 kubelet[3936]: E0114 01:22:26.049879 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:22:27.051038 kubelet[3936]: E0114 01:22:27.050221 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:22:28.050844 kubelet[3936]: E0114 01:22:28.050799 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:22:30.049421 kubelet[3936]: E0114 01:22:30.049341 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:22:30.547907 systemd[1]: Started sshd@25-10.200.4.14:22-10.200.16.10:45760.service - OpenSSH per-connection server daemon (10.200.16.10:45760). Jan 14 01:22:30.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.14:22-10.200.16.10:45760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:30.553817 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:30.553903 kernel: audit: type=1130 audit(1768353750.547:966): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.14:22-10.200.16.10:45760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:31.096000 audit[6830]: USER_ACCT pid=6830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.105020 sshd[6830]: Accepted publickey for core from 10.200.16.10 port 45760 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:31.105703 kernel: audit: type=1101 audit(1768353751.096:967): pid=6830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.106983 sshd-session[6830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:31.105000 audit[6830]: CRED_ACQ pid=6830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.120617 kernel: audit: type=1103 audit(1768353751.105:968): pid=6830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.120706 kernel: audit: type=1006 audit(1768353751.105:969): pid=6830 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 14 01:22:31.105000 audit[6830]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe38dece00 a2=3 a3=0 items=0 ppid=1 pid=6830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:31.121489 systemd-logind[2405]: New session 29 of user core. Jan 14 01:22:31.127623 kernel: audit: type=1300 audit(1768353751.105:969): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe38dece00 a2=3 a3=0 items=0 ppid=1 pid=6830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:31.105000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:31.132467 kernel: audit: type=1327 audit(1768353751.105:969): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:31.133870 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 14 01:22:31.137000 audit[6830]: USER_START pid=6830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.155942 kernel: audit: type=1105 audit(1768353751.137:970): pid=6830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.156018 kernel: audit: type=1103 audit(1768353751.139:971): pid=6834 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.139000 audit[6834]: CRED_ACQ pid=6834 uid=0 auid=500 ses=29 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.487336 sshd[6834]: Connection closed by 10.200.16.10 port 45760 Jan 14 01:22:31.489068 sshd-session[6830]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:31.489000 audit[6830]: USER_END pid=6830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.489000 audit[6830]: CRED_DISP pid=6830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.497656 kernel: audit: type=1106 audit(1768353751.489:972): pid=6830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.497695 kernel: audit: type=1104 audit(1768353751.489:973): pid=6830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.497922 systemd[1]: sshd@25-10.200.4.14:22-10.200.16.10:45760.service: Deactivated successfully. Jan 14 01:22:31.500900 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 01:22:31.503776 systemd-logind[2405]: Session 29 logged out. 
Waiting for processes to exit. Jan 14 01:22:31.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.14:22-10.200.16.10:45760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:31.505321 systemd-logind[2405]: Removed session 29. Jan 14 01:22:32.048760 kubelet[3936]: E0114 01:22:32.048724 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:22:34.049997 kubelet[3936]: E0114 01:22:34.049932 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:22:36.049353 kubelet[3936]: E0114 01:22:36.049199 3936 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:22:36.599110 systemd[1]: Started sshd@26-10.200.4.14:22-10.200.16.10:45762.service - OpenSSH per-connection server daemon (10.200.16.10:45762). Jan 14 01:22:36.606249 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:36.606291 kernel: audit: type=1130 audit(1768353756.597:975): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.14:22-10.200.16.10:45762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:36.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.14:22-10.200.16.10:45762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:37.136982 sshd[6849]: Accepted publickey for core from 10.200.16.10 port 45762 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:37.143659 kernel: audit: type=1101 audit(1768353757.135:976): pid=6849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.135000 audit[6849]: USER_ACCT pid=6849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.146525 sshd-session[6849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:37.142000 audit[6849]: CRED_ACQ pid=6849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.154662 kernel: audit: type=1103 audit(1768353757.142:977): pid=6849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.166659 kernel: audit: type=1006 audit(1768353757.142:978): pid=6849 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 14 01:22:37.142000 audit[6849]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffdb3c6a0 a2=3 a3=0 items=0 ppid=1 pid=6849 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:37.173099 systemd-logind[2405]: New session 30 of user core. Jan 14 01:22:37.175951 kernel: audit: type=1300 audit(1768353757.142:978): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffdb3c6a0 a2=3 a3=0 items=0 ppid=1 pid=6849 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:37.177766 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 14 01:22:37.181660 kernel: audit: type=1327 audit(1768353757.142:978): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:37.142000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:37.200661 kernel: audit: type=1105 audit(1768353757.189:979): pid=6849 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.189000 audit[6849]: USER_START pid=6849 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.198000 audit[6860]: CRED_ACQ pid=6860 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.213650 kernel: audit: type=1103 audit(1768353757.198:980): pid=6860 uid=0 auid=500 ses=30 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.535193 sshd[6860]: Connection closed by 10.200.16.10 port 45762 Jan 14 01:22:37.535780 sshd-session[6849]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:37.535000 audit[6849]: USER_END pid=6849 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.544741 kernel: audit: type=1106 audit(1768353757.535:981): pid=6849 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.543000 audit[6849]: CRED_DISP pid=6849 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.547951 systemd[1]: sshd@26-10.200.4.14:22-10.200.16.10:45762.service: Deactivated successfully. Jan 14 01:22:37.551982 systemd-logind[2405]: Session 30 logged out. Waiting for processes to exit. Jan 14 01:22:37.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.14:22-10.200.16.10:45762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:37.552722 kernel: audit: type=1104 audit(1768353757.543:982): pid=6849 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.553821 systemd[1]: session-30.scope: Deactivated successfully. Jan 14 01:22:37.557477 systemd-logind[2405]: Removed session 30. Jan 14 01:22:40.049368 kubelet[3936]: E0114 01:22:40.049329 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4" Jan 14 01:22:41.049478 kubelet[3936]: E0114 01:22:41.049137 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 
01:22:42.049379 kubelet[3936]: E0114 01:22:42.049126 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-mp9tk" podUID="f6926f72-9c01-4e67-abef-2eb546c46570" Jan 14 01:22:42.049379 kubelet[3936]: E0114 01:22:42.049172 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7758cb5d69-rt2pc" podUID="b4f27767-b32c-43ae-95eb-f1d5e5f34f59" Jan 14 01:22:42.649587 systemd[1]: Started sshd@27-10.200.4.14:22-10.200.16.10:49034.service - OpenSSH per-connection server daemon (10.200.16.10:49034). Jan 14 01:22:42.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.14:22-10.200.16.10:49034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:42.653204 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:42.653495 kernel: audit: type=1130 audit(1768353762.649:984): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.14:22-10.200.16.10:49034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:43.214000 audit[6872]: USER_ACCT pid=6872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.217237 sshd-session[6872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:43.217874 sshd[6872]: Accepted publickey for core from 10.200.16.10 port 49034 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:43.215000 audit[6872]: CRED_ACQ pid=6872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.225779 systemd-logind[2405]: New session 31 of user core. Jan 14 01:22:43.228706 kernel: audit: type=1101 audit(1768353763.214:985): pid=6872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.228801 kernel: audit: type=1103 audit(1768353763.215:986): pid=6872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.230867 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 14 01:22:43.234296 kernel: audit: type=1006 audit(1768353763.215:987): pid=6872 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 14 01:22:43.215000 audit[6872]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdadc8abd0 a2=3 a3=0 items=0 ppid=1 pid=6872 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:43.240651 kernel: audit: type=1300 audit(1768353763.215:987): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdadc8abd0 a2=3 a3=0 items=0 ppid=1 pid=6872 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:43.215000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:43.246656 kernel: audit: type=1327 audit(1768353763.215:987): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:43.236000 audit[6872]: USER_START pid=6872 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.246000 audit[6876]: CRED_ACQ pid=6876 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.260349 kernel: audit: type=1105 audit(1768353763.236:988): pid=6872 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.260396 kernel: audit: type=1103 audit(1768353763.246:989): pid=6876 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.571800 sshd[6876]: Connection closed by 10.200.16.10 port 49034 Jan 14 01:22:43.574793 sshd-session[6872]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:43.587658 kernel: audit: type=1106 audit(1768353763.576:990): pid=6872 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.576000 audit[6872]: USER_END pid=6872 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.579613 systemd[1]: sshd@27-10.200.4.14:22-10.200.16.10:49034.service: Deactivated successfully. Jan 14 01:22:43.582154 systemd[1]: session-31.scope: Deactivated successfully. Jan 14 01:22:43.583164 systemd-logind[2405]: Session 31 logged out. Waiting for processes to exit. Jan 14 01:22:43.587094 systemd-logind[2405]: Removed session 31. 
Jan 14 01:22:43.576000 audit[6872]: CRED_DISP pid=6872 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.601675 kernel: audit: type=1104 audit(1768353763.576:991): pid=6872 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.14:22-10.200.16.10:49034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:44.049446 kubelet[3936]: E0114 01:22:44.049401 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8778f546-8gqr9" podUID="73f6bb79-7f15-4fdc-bde4-bdf058188aed" Jan 14 01:22:48.698390 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:48.698502 kernel: audit: type=1130 audit(1768353768.690:993): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.4.14:22-10.200.16.10:49036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:48.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.4.14:22-10.200.16.10:49036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:48.690964 systemd[1]: Started sshd@28-10.200.4.14:22-10.200.16.10:49036.service - OpenSSH per-connection server daemon (10.200.16.10:49036). Jan 14 01:22:49.053794 kubelet[3936]: E0114 01:22:49.053749 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69445f69fb-5cr2q" podUID="ed6d8542-dfd5-4ecd-928d-cf86db3537f3" Jan 14 01:22:49.254000 audit[6916]: USER_ACCT pid=6916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.255042 sshd[6916]: Accepted publickey for core from 10.200.16.10 port 49036 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:49.261162 sshd-session[6916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:49.258000 audit[6916]: 
CRED_ACQ pid=6916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.266577 kernel: audit: type=1101 audit(1768353769.254:994): pid=6916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.266653 kernel: audit: type=1103 audit(1768353769.258:995): pid=6916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.268672 kernel: audit: type=1006 audit(1768353769.258:996): pid=6916 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 14 01:22:49.268300 systemd-logind[2405]: New session 32 of user core. Jan 14 01:22:49.258000 audit[6916]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff88cae00 a2=3 a3=0 items=0 ppid=1 pid=6916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:49.272854 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 14 01:22:49.278265 kernel: audit: type=1300 audit(1768353769.258:996): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff88cae00 a2=3 a3=0 items=0 ppid=1 pid=6916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:49.258000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:49.284701 kernel: audit: type=1327 audit(1768353769.258:996): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:49.284914 kernel: audit: type=1105 audit(1768353769.272:997): pid=6916 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.272000 audit[6916]: USER_START pid=6916 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.279000 audit[6920]: CRED_ACQ pid=6920 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.291656 kernel: audit: type=1103 audit(1768353769.279:998): pid=6920 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.614031 sshd[6920]: 
Connection closed by 10.200.16.10 port 49036 Jan 14 01:22:49.614442 sshd-session[6916]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:49.615000 audit[6916]: USER_END pid=6916 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.618379 systemd[1]: sshd@28-10.200.4.14:22-10.200.16.10:49036.service: Deactivated successfully. Jan 14 01:22:49.620746 systemd[1]: session-32.scope: Deactivated successfully. Jan 14 01:22:49.623159 systemd-logind[2405]: Session 32 logged out. Waiting for processes to exit. Jan 14 01:22:49.623952 systemd-logind[2405]: Removed session 32. Jan 14 01:22:49.615000 audit[6916]: CRED_DISP pid=6916 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.630602 kernel: audit: type=1106 audit(1768353769.615:999): pid=6916 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.631577 kernel: audit: type=1104 audit(1768353769.615:1000): pid=6916 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:49.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@28-10.200.4.14:22-10.200.16.10:49036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:50.049731 kubelet[3936]: E0114 01:22:50.049695 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f77f5cb44-nf9jt" podUID="1876e14b-df10-499c-9b9b-1ece31d0136a" Jan 14 01:22:52.050445 kubelet[3936]: E0114 01:22:52.050401 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dg8l8" podUID="f1cc4c9c-5d75-49d5-a28f-b34a79d2a4c5" Jan 14 01:22:52.051944 kubelet[3936]: E0114 01:22:52.051902 3936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bg7tj" podUID="0ad549b6-0df1-4bac-8f3a-1bc2943edac4"