Jan 21 00:57:48.012499 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 22:19:08 -00 2026 Jan 21 00:57:48.012528 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a Jan 21 00:57:48.012540 kernel: BIOS-provided physical RAM map: Jan 21 00:57:48.012547 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 21 00:57:48.012554 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 21 00:57:48.012561 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jan 21 00:57:48.012569 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jan 21 00:57:48.012576 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jan 21 00:57:48.012583 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jan 21 00:57:48.012592 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 21 00:57:48.012599 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 21 00:57:48.012607 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 21 00:57:48.012614 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 21 00:57:48.012620 kernel: printk: legacy bootconsole [earlyser0] enabled Jan 21 00:57:48.012629 kernel: NX (Execute Disable) protection: active Jan 21 00:57:48.012638 kernel: APIC: Static calls initialized Jan 21 00:57:48.012645 kernel: efi: EFI v2.7 by Microsoft Jan 21 00:57:48.012653 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f426618 RNG=0x3ffd2018 Jan 21 00:57:48.012660 kernel: random: crng init done Jan 21 00:57:48.012668 kernel: secureboot: Secure boot disabled Jan 21 00:57:48.012676 kernel: SMBIOS 3.1.0 present. 
Jan 21 00:57:48.012684 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Jan 21 00:57:48.012692 kernel: DMI: Memory slots populated: 2/2 Jan 21 00:57:48.012699 kernel: Hypervisor detected: Microsoft Hyper-V Jan 21 00:57:48.012706 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jan 21 00:57:48.012715 kernel: Hyper-V: Nested features: 0x3e0101 Jan 21 00:57:48.012722 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 21 00:57:48.012729 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 21 00:57:48.012737 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 21 00:57:48.012745 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 21 00:57:48.012753 kernel: tsc: Detected 2300.001 MHz processor Jan 21 00:57:48.012761 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 21 00:57:48.012780 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 21 00:57:48.012788 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jan 21 00:57:48.012798 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 21 00:57:48.012806 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 21 00:57:48.012814 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jan 21 00:57:48.012822 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jan 21 00:57:48.012830 kernel: Using GB pages for direct mapping Jan 21 00:57:48.012839 kernel: ACPI: Early table checksum verification disabled Jan 21 00:57:48.012853 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 21 00:57:48.012861 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 00:57:48.012869 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 00:57:48.012877 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 21 00:57:48.012886 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 21 00:57:48.012894 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 00:57:48.012905 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 00:57:48.012913 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 00:57:48.012922 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 21 00:57:48.012931 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 21 00:57:48.012939 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 21 00:57:48.012947 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 21 00:57:48.012957 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 21 00:57:48.012965 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 21 00:57:48.012974 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 21 00:57:48.012982 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 21 00:57:48.012991 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 21 00:57:48.013000 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Jan 21 00:57:48.013008 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jan 21 00:57:48.013018 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 21 00:57:48.013026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 21 00:57:48.013034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jan 21 00:57:48.013043 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jan 21 00:57:48.013051 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jan 21 00:57:48.013060 kernel: Zone ranges: Jan 21 00:57:48.013069 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 21 00:57:48.013079 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 21 00:57:48.013088 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 21 00:57:48.013096 kernel: Device empty Jan 21 00:57:48.013106 kernel: Movable zone start for each node Jan 21 00:57:48.013115 kernel: Early memory node ranges Jan 21 00:57:48.013125 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 21 00:57:48.013135 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jan 21 00:57:48.013149 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jan 21 00:57:48.013159 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 21 00:57:48.013168 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 21 00:57:48.013176 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 21 00:57:48.013185 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 21 00:57:48.013194 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 21 00:57:48.013203 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 21 00:57:48.013214 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 21 00:57:48.013226 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 21 00:57:48.013236 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 21 00:57:48.013245 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 21 00:57:48.013255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 21 00:57:48.013264 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 21 00:57:48.013273 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 21 00:57:48.013282 kernel: TSC deadline timer available Jan 21 00:57:48.013294 kernel: CPU topo: Max. logical packages: 1 Jan 21 00:57:48.013303 kernel: CPU topo: Max. logical dies: 1 Jan 21 00:57:48.013312 kernel: CPU topo: Max. dies per package: 1 Jan 21 00:57:48.013321 kernel: CPU topo: Max. threads per core: 2 Jan 21 00:57:48.013329 kernel: CPU topo: Num. cores per package: 1 Jan 21 00:57:48.013338 kernel: CPU topo: Num. 
threads per package: 2 Jan 21 00:57:48.013347 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 21 00:57:48.013360 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 21 00:57:48.013369 kernel: Booting paravirtualized kernel on Hyper-V Jan 21 00:57:48.013379 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 21 00:57:48.013388 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 21 00:57:48.013397 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 21 00:57:48.013406 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 21 00:57:48.013415 kernel: pcpu-alloc: [0] 0 1 Jan 21 00:57:48.013427 kernel: Hyper-V: PV spinlocks enabled Jan 21 00:57:48.013437 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 21 00:57:48.013448 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a Jan 21 00:57:48.013458 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 21 00:57:48.013468 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 21 00:57:48.013477 kernel: Fallback order for Node 0: 0 Jan 21 00:57:48.013490 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jan 21 00:57:48.013499 kernel: Policy zone: Normal Jan 21 00:57:48.013508 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 21 00:57:48.013516 kernel: software IO TLB: area num 2. Jan 21 00:57:48.013526 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 21 00:57:48.013536 kernel: ftrace: allocating 40097 entries in 157 pages Jan 21 00:57:48.013545 kernel: ftrace: allocated 157 pages with 5 groups Jan 21 00:57:48.013555 kernel: Dynamic Preempt: voluntary Jan 21 00:57:48.013566 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 21 00:57:48.013577 kernel: rcu: RCU event tracing is enabled. Jan 21 00:57:48.013595 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 21 00:57:48.013606 kernel: Trampoline variant of Tasks RCU enabled. Jan 21 00:57:48.013616 kernel: Rude variant of Tasks RCU enabled. Jan 21 00:57:48.013626 kernel: Tracing variant of Tasks RCU enabled. Jan 21 00:57:48.013635 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 21 00:57:48.013644 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 21 00:57:48.013655 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 21 00:57:48.013670 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 21 00:57:48.013681 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 21 00:57:48.013691 kernel: Using NULL legacy PIC Jan 21 00:57:48.013701 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 21 00:57:48.013713 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 21 00:57:48.013722 kernel: Console: colour dummy device 80x25 Jan 21 00:57:48.013732 kernel: printk: legacy console [tty1] enabled Jan 21 00:57:48.013742 kernel: printk: legacy console [ttyS0] enabled Jan 21 00:57:48.013752 kernel: printk: legacy bootconsole [earlyser0] disabled Jan 21 00:57:48.013761 kernel: ACPI: Core revision 20240827 Jan 21 00:57:48.013782 kernel: Failed to register legacy timer interrupt Jan 21 00:57:48.013798 kernel: APIC: Switch to symmetric I/O mode setup Jan 21 00:57:48.013807 kernel: x2apic enabled Jan 21 00:57:48.013819 kernel: APIC: Switched APIC routing to: physical x2apic Jan 21 00:57:48.013829 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 21 00:57:48.013839 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 21 00:57:48.013848 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jan 21 00:57:48.013859 kernel: Hyper-V: Using IPI hypercalls Jan 21 00:57:48.013871 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 21 00:57:48.013881 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 21 00:57:48.013890 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 21 00:57:48.013899 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 21 00:57:48.013909 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 21 00:57:48.013918 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 21 00:57:48.013927 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, max_idle_ns: 440795237604 ns Jan 21 00:57:48.013939 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300001) Jan 21 00:57:48.013949 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 21 00:57:48.013958 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 21 00:57:48.013967 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 21 00:57:48.013976 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 21 00:57:48.013984 kernel: Spectre V2 : Mitigation: Retpolines Jan 21 00:57:48.013993 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 21 00:57:48.014002 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 21 00:57:48.014015 kernel: RETBleed: Vulnerable Jan 21 00:57:48.014024 kernel: Speculative Store Bypass: Vulnerable Jan 21 00:57:48.014032 kernel: active return thunk: its_return_thunk Jan 21 00:57:48.014041 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 21 00:57:48.014049 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 21 00:57:48.014057 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 21 00:57:48.014066 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 21 00:57:48.014075 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 21 00:57:48.014085 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 21 00:57:48.014094 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 21 00:57:48.014106 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jan 21 00:57:48.014116 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jan 21 00:57:48.014124 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jan 21 00:57:48.014134 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 21 00:57:48.014143 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 21 00:57:48.014152 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 21 00:57:48.014161 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 21 00:57:48.014171 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jan 21 00:57:48.014179 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jan 21 00:57:48.014187 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jan 21 00:57:48.014196 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jan 21 00:57:48.014205 kernel: Freeing SMP alternatives memory: 32K Jan 21 00:57:48.014213 kernel: pid_max: default: 32768 minimum: 301 Jan 21 00:57:48.014221 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 21 00:57:48.014229 kernel: landlock: Up and running. Jan 21 00:57:48.014237 kernel: SELinux: Initializing. Jan 21 00:57:48.014245 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 21 00:57:48.014253 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 21 00:57:48.014262 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jan 21 00:57:48.014270 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jan 21 00:57:48.014278 kernel: signal: max sigframe size: 11952 Jan 21 00:57:48.014288 kernel: rcu: Hierarchical SRCU implementation. Jan 21 00:57:48.014297 kernel: rcu: Max phase no-delay instances is 400. Jan 21 00:57:48.014307 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 21 00:57:48.014316 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 21 00:57:48.014325 kernel: smp: Bringing up secondary CPUs ... Jan 21 00:57:48.014334 kernel: smpboot: x86: Booting SMP configuration: Jan 21 00:57:48.014343 kernel: .... 
node #0, CPUs: #1 Jan 21 00:57:48.014353 kernel: smp: Brought up 1 node, 2 CPUs Jan 21 00:57:48.014362 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jan 21 00:57:48.014370 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 283604K reserved, 0K cma-reserved) Jan 21 00:57:48.014379 kernel: devtmpfs: initialized Jan 21 00:57:48.014387 kernel: x86/mm: Memory block size: 128MB Jan 21 00:57:48.014396 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 21 00:57:48.014405 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 21 00:57:48.014416 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 21 00:57:48.014425 kernel: pinctrl core: initialized pinctrl subsystem Jan 21 00:57:48.014434 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 21 00:57:48.014443 kernel: audit: initializing netlink subsys (disabled) Jan 21 00:57:48.014452 kernel: audit: type=2000 audit(1768957062.077:1): state=initialized audit_enabled=0 res=1 Jan 21 00:57:48.014461 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 21 00:57:48.014470 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 21 00:57:48.014478 kernel: cpuidle: using governor menu Jan 21 00:57:48.014489 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 21 00:57:48.014497 kernel: dca service started, version 1.12.1 Jan 21 00:57:48.014506 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jan 21 00:57:48.014515 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 21 00:57:48.014523 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 21 00:57:48.014532 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 21 00:57:48.014543 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 21 00:57:48.014552 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 21 00:57:48.014561 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 21 00:57:48.014569 kernel: ACPI: Added _OSI(Module Device) Jan 21 00:57:48.014578 kernel: ACPI: Added _OSI(Processor Device) Jan 21 00:57:48.014587 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 21 00:57:48.014596 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 21 00:57:48.014605 kernel: ACPI: Interpreter enabled Jan 21 00:57:48.014616 kernel: ACPI: PM: (supports S0 S5) Jan 21 00:57:48.014624 kernel: ACPI: Using IOAPIC for interrupt routing Jan 21 00:57:48.014633 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 21 00:57:48.014642 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 21 00:57:48.014651 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 21 00:57:48.014660 kernel: iommu: Default domain type: Translated Jan 21 00:57:48.014669 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 21 00:57:48.014680 kernel: efivars: Registered efivars operations Jan 21 00:57:48.014689 kernel: PCI: Using ACPI for IRQ routing Jan 21 00:57:48.014698 kernel: PCI: System does not support PCI Jan 21 00:57:48.014707 kernel: vgaarb: loaded Jan 21 00:57:48.014716 kernel: clocksource: Switched to clocksource tsc-early Jan 21 00:57:48.014724 kernel: VFS: Disk quotas dquot_6.6.0 Jan 21 00:57:48.014733 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 21 00:57:48.014743 kernel: pnp: PnP ACPI init Jan 21 00:57:48.014752 kernel: pnp: PnP ACPI: found 3 devices Jan 21 00:57:48.014761 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 21 00:57:48.014801 kernel: NET: Registered PF_INET protocol family Jan 21 00:57:48.014811 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 21 00:57:48.014820 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 21 00:57:48.014828 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 21 00:57:48.014840 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 21 00:57:48.014849 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 21 00:57:48.014858 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 21 00:57:48.014867 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 21 00:57:48.014901 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 21 00:57:48.014911 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 21 00:57:48.014920 kernel: NET: Registered PF_XDP protocol family Jan 21 00:57:48.014931 kernel: PCI: CLS 0 bytes, default 64 Jan 21 00:57:48.014939 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 21 00:57:48.014948 kernel: software IO TLB: mapped [mem 0x000000003a99d000-0x000000003e99d000] (64MB) Jan 21 00:57:48.014957 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jan 21 00:57:48.014967 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jan 21 00:57:48.014977 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, 
max_idle_ns: 440795237604 ns Jan 21 00:57:48.014987 kernel: clocksource: Switched to clocksource tsc Jan 21 00:57:48.014998 kernel: Initialise system trusted keyrings Jan 21 00:57:48.015007 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 21 00:57:48.015016 kernel: Key type asymmetric registered Jan 21 00:57:48.015024 kernel: Asymmetric key parser 'x509' registered Jan 21 00:57:48.015033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 21 00:57:48.015041 kernel: io scheduler mq-deadline registered Jan 21 00:57:48.015050 kernel: io scheduler kyber registered Jan 21 00:57:48.015061 kernel: io scheduler bfq registered Jan 21 00:57:48.015070 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 21 00:57:48.015078 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 21 00:57:48.015087 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 21 00:57:48.015097 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 21 00:57:48.015105 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jan 21 00:57:48.015115 kernel: i8042: PNP: No PS/2 controller found. Jan 21 00:57:48.015282 kernel: rtc_cmos 00:02: registered as rtc0 Jan 21 00:57:48.015391 kernel: rtc_cmos 00:02: setting system clock to 2026-01-21T00:57:44 UTC (1768957064) Jan 21 00:57:48.015491 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 21 00:57:48.015503 kernel: intel_pstate: Intel P-state driver initializing Jan 21 00:57:48.015512 kernel: efifb: probing for efifb Jan 21 00:57:48.015522 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 21 00:57:48.015535 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 21 00:57:48.015544 kernel: efifb: scrolling: redraw Jan 21 00:57:48.015554 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 21 00:57:48.015564 kernel: Console: switching to colour frame buffer device 128x48 Jan 21 00:57:48.015573 kernel: fb0: EFI VGA frame buffer device Jan 21 00:57:48.015583 kernel: pstore: Using crash dump compression: deflate Jan 21 00:57:48.015593 kernel: pstore: Registered efi_pstore as persistent store backend Jan 21 00:57:48.015604 kernel: NET: Registered PF_INET6 protocol family Jan 21 00:57:48.015614 kernel: Segment Routing with IPv6 Jan 21 00:57:48.015623 kernel: In-situ OAM (IOAM) with IPv6 Jan 21 00:57:48.015632 kernel: NET: Registered PF_PACKET protocol family Jan 21 00:57:48.015642 kernel: Key type dns_resolver registered Jan 21 00:57:48.015652 kernel: IPI shorthand broadcast: enabled Jan 21 00:57:48.015662 kernel: sched_clock: Marking stable (2064004928, 92556367)->(2473734045, -317172750) Jan 21 00:57:48.015672 kernel: registered taskstats version 1 Jan 21 00:57:48.015683 kernel: Loading compiled-in X.509 certificates Jan 21 00:57:48.015693 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 169e95345ec0c7da7389f5f6d7b9c06dfd352178' Jan 21 00:57:48.015703 kernel: Demotion targets for Node 0: null Jan 21 00:57:48.015712 kernel: Key type .fscrypt registered Jan 21 00:57:48.015722 kernel: Key type fscrypt-provisioning registered Jan 21 00:57:48.015731 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 21 00:57:48.015741 kernel: ima: Allocated hash algorithm: sha1 Jan 21 00:57:48.015752 kernel: ima: No architecture policies found Jan 21 00:57:48.015761 kernel: clk: Disabling unused clocks Jan 21 00:57:48.015790 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 21 00:57:48.015799 kernel: Write protecting the kernel read-only data: 47104k Jan 21 00:57:48.015808 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 21 00:57:48.015817 kernel: Run /init as init process Jan 21 00:57:48.015825 kernel: with arguments: Jan 21 00:57:48.015836 kernel: /init Jan 21 00:57:48.015844 kernel: with environment: Jan 21 00:57:48.015851 kernel: HOME=/ Jan 21 00:57:48.015860 kernel: TERM=linux Jan 21 00:57:48.015869 kernel: hv_vmbus: Vmbus version:5.3 Jan 21 00:57:48.015878 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 21 00:57:48.015886 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 21 00:57:48.015894 kernel: PTP clock support registered Jan 21 00:57:48.015904 kernel: hv_utils: Registering HyperV Utility Driver Jan 21 00:57:48.015912 kernel: hv_vmbus: registering driver hv_utils Jan 21 00:57:48.015920 kernel: hv_utils: Shutdown IC version 3.2 Jan 21 00:57:48.015928 kernel: hv_utils: Heartbeat IC version 3.0 Jan 21 00:57:48.015937 kernel: hv_utils: TimeSync IC version 4.0 Jan 21 00:57:48.015944 kernel: SCSI subsystem initialized Jan 21 00:57:48.015953 kernel: hv_vmbus: registering driver hv_pci Jan 21 00:57:48.016108 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 21 00:57:48.016229 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 21 00:57:48.016360 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 21 00:57:48.016477 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 21 00:57:48.016622 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 21 00:57:48.016751 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 21 00:57:48.016897 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 21 00:57:48.017025 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 21 00:57:48.017037 kernel: hv_vmbus: registering driver hv_storvsc Jan 21 00:57:48.017168 kernel: scsi host0: storvsc_host_t Jan 21 00:57:48.017304 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 21 00:57:48.017316 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 21 00:57:48.017325 kernel: hv_vmbus: registering driver hid_hyperv Jan 21 00:57:48.017334 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 21 00:57:48.017448 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 21 00:57:48.017460 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 21 00:57:48.017471 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 21 00:57:48.017572 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 21 00:57:48.017694 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 21 00:57:48.017794 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 21 00:57:48.017806 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 21 00:57:48.017926 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 21 00:57:48.017938 kernel: cdrom: 
Uniform CD-ROM driver Revision: 3.20 Jan 21 00:57:48.018077 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 21 00:57:48.018092 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 21 00:57:48.018102 kernel: device-mapper: uevent: version 1.0.3 Jan 21 00:57:48.018113 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 21 00:57:48.018124 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 21 00:57:48.018153 kernel: raid6: avx512x4 gen() 43079 MB/s Jan 21 00:57:48.018166 kernel: raid6: avx512x2 gen() 42685 MB/s Jan 21 00:57:48.018176 kernel: raid6: avx512x1 gen() 26416 MB/s Jan 21 00:57:48.018186 kernel: raid6: avx2x4 gen() 35704 MB/s Jan 21 00:57:48.018195 kernel: raid6: avx2x2 gen() 37171 MB/s Jan 21 00:57:48.018205 kernel: raid6: avx2x1 gen() 31570 MB/s Jan 21 00:57:48.018217 kernel: raid6: using algorithm avx512x4 gen() 43079 MB/s Jan 21 00:57:48.018231 kernel: raid6: .... xor() 7856 MB/s, rmw enabled Jan 21 00:57:48.018241 kernel: raid6: using avx512x2 recovery algorithm Jan 21 00:57:48.018251 kernel: xor: automatically using best checksumming function avx Jan 21 00:57:48.018261 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 21 00:57:48.018272 kernel: BTRFS: device fsid 1d50d7f2-b244-4434-b37e-796fa0c23345 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (880) Jan 21 00:57:48.018284 kernel: BTRFS info (device dm-0): first mount of filesystem 1d50d7f2-b244-4434-b37e-796fa0c23345 Jan 21 00:57:48.018294 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 21 00:57:48.018307 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 21 00:57:48.018318 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 21 00:57:48.018328 kernel: BTRFS info (device dm-0): enabling free space tree Jan 21 00:57:48.018338 kernel: loop: module loaded Jan 21 00:57:48.018347 kernel: loop0: detected capacity change from 0 to 100552 Jan 21 00:57:48.018357 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 21 00:57:48.018368 systemd[1]: Successfully made /usr/ read-only. Jan 21 00:57:48.018389 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 00:57:48.018403 systemd[1]: Detected virtualization microsoft. Jan 21 00:57:48.018413 systemd[1]: Detected architecture x86-64. Jan 21 00:57:48.018425 systemd[1]: Running in initrd. Jan 21 00:57:48.018439 systemd[1]: No hostname configured, using default hostname. Jan 21 00:57:48.018450 systemd[1]: Hostname set to . Jan 21 00:57:48.018463 systemd[1]: Initializing machine ID from random generator. Jan 21 00:57:48.018473 systemd[1]: Queued start job for default target initrd.target. Jan 21 00:57:48.018483 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 00:57:48.018493 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 00:57:48.018502 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 21 00:57:48.018513 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 21 00:57:48.018525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 00:57:48.018536 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 21 00:57:48.018546 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 21 00:57:48.018557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 00:57:48.018567 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 00:57:48.018579 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 21 00:57:48.018589 systemd[1]: Reached target paths.target - Path Units. Jan 21 00:57:48.018599 systemd[1]: Reached target slices.target - Slice Units. Jan 21 00:57:48.018609 systemd[1]: Reached target swap.target - Swaps. Jan 21 00:57:48.018619 systemd[1]: Reached target timers.target - Timer Units. Jan 21 00:57:48.018631 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 00:57:48.018641 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 00:57:48.018651 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 00:57:48.018661 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 21 00:57:48.018671 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 21 00:57:48.018681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 00:57:48.018691 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 00:57:48.018704 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 00:57:48.018714 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 00:57:48.018725 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 21 00:57:48.018736 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 21 00:57:48.018746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 00:57:48.018757 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 21 00:57:48.018781 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 21 00:57:48.018794 systemd[1]: Starting systemd-fsck-usr.service... Jan 21 00:57:48.018804 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 00:57:48.018814 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 00:57:48.018825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:57:48.018837 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 21 00:57:48.018848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 00:57:48.018858 systemd[1]: Finished systemd-fsck-usr.service. Jan 21 00:57:48.018868 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 21 00:57:48.018898 systemd-journald[1014]: Collecting audit messages is enabled. 
Jan 21 00:57:48.018924 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 00:57:48.018936 kernel: audit: type=1130 audit(1768957068.007:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.018948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 00:57:48.018959 systemd-journald[1014]: Journal started Jan 21 00:57:48.018984 systemd-journald[1014]: Runtime Journal (/run/log/journal/ac3d294431ee46c79123b4aabfdb7bad) is 8M, max 158.5M, 150.5M free. Jan 21 00:57:48.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.022788 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 00:57:48.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.027976 kernel: audit: type=1130 audit(1768957068.020:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.027552 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 00:57:48.056650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 00:57:48.057144 systemd-tmpfiles[1031]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 21 00:57:48.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.069787 kernel: audit: type=1130 audit(1768957068.063:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.082787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 21 00:57:48.110892 systemd-modules-load[1017]: Inserted module 'br_netfilter' Jan 21 00:57:48.112112 kernel: Bridge firewalling registered Jan 21 00:57:48.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.112372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 00:57:48.123194 kernel: audit: type=1130 audit(1768957068.112:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.123219 kernel: audit: type=1130 audit(1768957068.117:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:48.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.116975 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 00:57:48.125007 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 00:57:48.134940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:57:48.141639 kernel: audit: type=1130 audit(1768957068.134:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.139212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 21 00:57:48.153963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 00:57:48.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.160202 kernel: audit: type=1130 audit(1768957068.155:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.161905 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 00:57:48.159000 audit: BPF prog-id=6 op=LOAD Jan 21 00:57:48.168619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 00:57:48.170975 kernel: audit: type=1334 audit(1768957068.159:9): prog-id=6 op=LOAD Jan 21 00:57:48.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.175312 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 21 00:57:48.180126 kernel: audit: type=1130 audit(1768957068.173:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.196691 dracut-cmdline[1055]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a Jan 21 00:57:48.223513 systemd-resolved[1051]: Positive Trust Anchors: Jan 21 00:57:48.225388 systemd-resolved[1051]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 00:57:48.225679 systemd-resolved[1051]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 00:57:48.231222 systemd-resolved[1051]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 00:57:48.301510 systemd-resolved[1051]: Defaulting to hostname 'linux'. Jan 21 00:57:48.302353 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 00:57:48.316866 kernel: audit: type=1130 audit(1768957068.303:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.304144 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 00:57:48.368788 kernel: Loading iSCSI transport class v2.0-870. Jan 21 00:57:48.449787 kernel: iscsi: registered transport (tcp) Jan 21 00:57:48.511941 kernel: iscsi: registered transport (qla4xxx) Jan 21 00:57:48.512000 kernel: QLogic iSCSI HBA Driver Jan 21 00:57:48.559529 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 00:57:48.580378 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 00:57:48.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.581727 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 00:57:48.588883 kernel: audit: type=1130 audit(1768957068.580:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.624597 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 21 00:57:48.630673 kernel: audit: type=1130 audit(1768957068.624:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.630913 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 21 00:57:48.633664 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 21 00:57:48.660222 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 21 00:57:48.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:48.669898 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 00:57:48.671784 kernel: audit: type=1130 audit(1768957068.665:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.671815 kernel: audit: type=1334 audit(1768957068.667:15): prog-id=7 op=LOAD Jan 21 00:57:48.667000 audit: BPF prog-id=7 op=LOAD Jan 21 00:57:48.673324 kernel: audit: type=1334 audit(1768957068.667:16): prog-id=8 op=LOAD Jan 21 00:57:48.667000 audit: BPF prog-id=8 op=LOAD Jan 21 00:57:48.697422 systemd-udevd[1282]: Using default interface naming scheme 'v257'. Jan 21 00:57:48.709647 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 00:57:48.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.720792 kernel: audit: type=1130 audit(1768957068.714:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.721764 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 21 00:57:48.747471 dracut-pre-trigger[1356]: rd.md=0: removing MD RAID activation Jan 21 00:57:48.752757 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 00:57:48.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.761792 kernel: audit: type=1130 audit(1768957068.757:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.760000 audit: BPF prog-id=9 op=LOAD Jan 21 00:57:48.764607 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 00:57:48.766670 kernel: audit: type=1334 audit(1768957068.760:19): prog-id=9 op=LOAD Jan 21 00:57:48.781505 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 00:57:48.782734 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 00:57:48.794791 kernel: audit: type=1130 audit(1768957068.781:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.819642 systemd-networkd[1413]: lo: Link UP Jan 21 00:57:48.821154 systemd-networkd[1413]: lo: Gained carrier Jan 21 00:57:48.823095 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 00:57:48.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.825471 systemd[1]: Reached target network.target - Network. 
Jan 21 00:57:48.846604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 00:57:48.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.854606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 21 00:57:48.921123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 00:57:48.921968 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:57:48.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:48.924516 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:57:48.934729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:57:48.975025 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#250 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 21 00:57:48.975184 kernel: hv_vmbus: registering driver hv_netvsc Jan 21 00:57:48.985120 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd6457 (unnamed net_device) (uninitialized): VF slot 1 added Jan 21 00:57:48.992609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:57:48.996437 kernel: cryptd: max_cpu_qlen set to 1000 Jan 21 00:57:49.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:49.004632 systemd-networkd[1413]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:57:49.004639 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 00:57:49.007896 systemd-networkd[1413]: eth0: Link UP Jan 21 00:57:49.007991 systemd-networkd[1413]: eth0: Gained carrier Jan 21 00:57:49.008002 systemd-networkd[1413]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:57:49.043817 systemd-networkd[1413]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 21 00:57:49.061056 kernel: AES CTR mode by8 optimization enabled Jan 21 00:57:49.229789 kernel: nvme nvme0: using unchecked data buffer Jan 21 00:57:49.326743 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 21 00:57:49.331630 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 21 00:57:49.457926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jan 21 00:57:49.493992 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jan 21 00:57:49.595798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 21 00:57:49.689057 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 21 00:57:49.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:49.693146 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 00:57:49.694213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 00:57:49.701477 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 00:57:49.707685 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 21 00:57:49.736705 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 21 00:57:49.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:50.006653 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jan 21 00:57:50.006922 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jan 21 00:57:50.009663 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jan 21 00:57:50.011228 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jan 21 00:57:50.015931 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jan 21 00:57:50.019889 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jan 21 00:57:50.025023 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jan 21 00:57:50.026878 kernel: pci 7870:00:00.0: enabling Extended Tags Jan 21 00:57:50.043873 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jan 21 00:57:50.044060 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jan 21 00:57:50.047844 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jan 21 00:57:50.067324 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jan 21 00:57:50.077784 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jan 21 00:57:50.079784 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd6457 eth0: VF registering: eth1 Jan 21 00:57:50.079947 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jan 21 00:57:50.085787 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jan 21 00:57:50.085759 systemd-networkd[1413]: eth1: Interface name change detected, renamed to enP30832s1. Jan 21 00:57:50.190784 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 21 00:57:50.193943 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 21 00:57:50.205953 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd6457 eth0: Data path switched to VF: enP30832s1 Jan 21 00:57:50.206455 systemd-networkd[1413]: enP30832s1: Link UP Jan 21 00:57:50.207553 systemd-networkd[1413]: enP30832s1: Gained carrier Jan 21 00:57:50.683952 systemd-networkd[1413]: eth0: Gained IPv6LL Jan 21 00:57:50.705145 disk-uuid[1577]: Warning: The kernel is still using the old partition table. Jan 21 00:57:50.705145 disk-uuid[1577]: The new table will be used at the next reboot or after you Jan 21 00:57:50.705145 disk-uuid[1577]: run partprobe(8) or kpartx(8) Jan 21 00:57:50.705145 disk-uuid[1577]: The operation has completed successfully. Jan 21 00:57:50.711714 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 21 00:57:50.711826 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 21 00:57:50.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:50.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:50.720787 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 21 00:57:50.775837 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1623) Jan 21 00:57:50.775977 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:57:50.777508 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 21 00:57:50.817337 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 00:57:50.817377 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 21 00:57:50.818221 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 00:57:50.823802 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:57:50.824436 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 21 00:57:50.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:50.828379 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 21 00:57:51.869727 ignition[1642]: Ignition 2.24.0 Jan 21 00:57:51.869739 ignition[1642]: Stage: fetch-offline Jan 21 00:57:51.872117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 00:57:51.869875 ignition[1642]: no configs at "/usr/lib/ignition/base.d" Jan 21 00:57:51.869883 ignition[1642]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 00:57:51.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:51.882127 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 21 00:57:51.869970 ignition[1642]: parsed url from cmdline: "" Jan 21 00:57:51.869972 ignition[1642]: no config URL provided Jan 21 00:57:51.869977 ignition[1642]: reading system config file "/usr/lib/ignition/user.ign" Jan 21 00:57:51.869983 ignition[1642]: no config at "/usr/lib/ignition/user.ign" Jan 21 00:57:51.869988 ignition[1642]: failed to fetch config: resource requires networking Jan 21 00:57:51.870138 ignition[1642]: Ignition finished successfully Jan 21 00:57:51.906731 ignition[1648]: Ignition 2.24.0 Jan 21 00:57:51.906741 ignition[1648]: Stage: fetch Jan 21 00:57:51.906991 ignition[1648]: no configs at "/usr/lib/ignition/base.d" Jan 21 00:57:51.906999 ignition[1648]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 00:57:51.907076 ignition[1648]: parsed url from cmdline: "" Jan 21 00:57:51.907079 ignition[1648]: no config URL provided Jan 21 00:57:51.907084 ignition[1648]: reading system config file "/usr/lib/ignition/user.ign" Jan 21 00:57:51.907089 ignition[1648]: no config at "/usr/lib/ignition/user.ign" Jan 21 00:57:51.907109 ignition[1648]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 21 00:57:51.980683 ignition[1648]: GET result: OK Jan 21 00:57:51.980751 ignition[1648]: config has been read from IMDS userdata Jan 21 00:57:51.980790 ignition[1648]: parsing config with SHA512: 01f5b52ce0bc37e2988756a96a7e5f127a02873973c7df48e09f21b1e579054fda9fcce18203baa92eedae32a0506590cadeab746b1763a7d0b1433f35a1d520 Jan 21 00:57:51.987329 unknown[1648]: fetched base config from "system" Jan 21 00:57:51.987339 unknown[1648]: fetched base config from "system" Jan 21 00:57:51.987699 ignition[1648]: fetch: fetch complete Jan 21 00:57:51.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:51.987344 unknown[1648]: fetched user config from "azure" Jan 21 00:57:51.987704 ignition[1648]: fetch: fetch passed Jan 21 00:57:51.990320 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 21 00:57:51.987741 ignition[1648]: Ignition finished successfully Jan 21 00:57:51.994946 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 21 00:57:52.022390 ignition[1655]: Ignition 2.24.0 Jan 21 00:57:52.022399 ignition[1655]: Stage: kargs Jan 21 00:57:52.022641 ignition[1655]: no configs at "/usr/lib/ignition/base.d" Jan 21 00:57:52.022649 ignition[1655]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 00:57:52.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:52.026079 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 21 00:57:52.023564 ignition[1655]: kargs: kargs passed Jan 21 00:57:52.032828 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 21 00:57:52.023596 ignition[1655]: Ignition finished successfully Jan 21 00:57:52.055643 ignition[1661]: Ignition 2.24.0 Jan 21 00:57:52.055653 ignition[1661]: Stage: disks Jan 21 00:57:52.055910 ignition[1661]: no configs at "/usr/lib/ignition/base.d" Jan 21 00:57:52.055918 ignition[1661]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 00:57:52.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:52.059445 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 21 00:57:52.056820 ignition[1661]: disks: disks passed Jan 21 00:57:52.061206 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 21 00:57:52.056855 ignition[1661]: Ignition finished successfully Jan 21 00:57:52.066335 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 21 00:57:52.068063 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 00:57:52.072306 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 00:57:52.076812 systemd[1]: Reached target basic.target - Basic System. Jan 21 00:57:52.080544 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 21 00:57:52.224390 systemd-fsck[1669]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 21 00:57:52.227451 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 21 00:57:52.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:52.232764 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 21 00:57:52.526064 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cf9e7296-d0ad-4d9a-b030-d4e17a1c88bf r/w with ordered data mode. Quota mode: none. Jan 21 00:57:52.526624 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 21 00:57:52.529110 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 21 00:57:52.565982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 00:57:52.577853 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 21 00:57:52.588061 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1678) Jan 21 00:57:52.588099 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:57:52.589686 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 21 00:57:52.590888 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 21 00:57:52.593912 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 21 00:57:52.593945 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 00:57:52.604813 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 00:57:52.604835 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 21 00:57:52.604847 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 00:57:52.606396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 21 00:57:52.606973 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 21 00:57:52.609879 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 21 00:57:53.215953 coreos-metadata[1680]: Jan 21 00:57:53.215 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 21 00:57:53.219895 coreos-metadata[1680]: Jan 21 00:57:53.218 INFO Fetch successful Jan 21 00:57:53.219895 coreos-metadata[1680]: Jan 21 00:57:53.218 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 21 00:57:53.226861 coreos-metadata[1680]: Jan 21 00:57:53.225 INFO Fetch successful Jan 21 00:57:53.238710 coreos-metadata[1680]: Jan 21 00:57:53.238 INFO wrote hostname ci-4547.0.0-n-ed178c4493 to /sysroot/etc/hostname Jan 21 00:57:53.241720 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 21 00:57:53.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:54.525336 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 21 00:57:54.533749 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 21 00:57:54.533790 kernel: audit: type=1130 audit(1768957074.527:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:54.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:54.531042 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 21 00:57:54.537839 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 21 00:57:54.568479 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 21 00:57:54.570243 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:57:54.583081 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 21 00:57:54.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:54.592817 kernel: audit: type=1130 audit(1768957074.584:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:54.596425 ignition[1783]: INFO : Ignition 2.24.0 Jan 21 00:57:54.596425 ignition[1783]: INFO : Stage: mount Jan 21 00:57:54.599196 ignition[1783]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 00:57:54.599196 ignition[1783]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 00:57:54.599196 ignition[1783]: INFO : mount: mount passed Jan 21 00:57:54.599196 ignition[1783]: INFO : Ignition finished successfully Jan 21 00:57:54.599037 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 21 00:57:54.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:54.608263 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jan 21 00:57:54.612061 kernel: audit: type=1130 audit(1768957074.606:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:54.625965 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 00:57:54.651819 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1793) Jan 21 00:57:54.652061 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:57:54.654051 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 21 00:57:54.660113 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 21 00:57:54.660155 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 21 00:57:54.661527 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 21 00:57:54.663398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 21 00:57:54.688377 ignition[1809]: INFO : Ignition 2.24.0 Jan 21 00:57:54.688377 ignition[1809]: INFO : Stage: files Jan 21 00:57:54.691820 ignition[1809]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 00:57:54.691820 ignition[1809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 00:57:54.691820 ignition[1809]: DEBUG : files: compiled without relabeling support, skipping Jan 21 00:57:54.691820 ignition[1809]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 21 00:57:54.691820 ignition[1809]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 21 00:57:54.756657 ignition[1809]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 21 00:57:54.759860 ignition[1809]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 21 00:57:54.759860 ignition[1809]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 21 00:57:54.758762 unknown[1809]: wrote ssh authorized keys file for user: core Jan 21 00:57:54.768812 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 21 00:57:54.768812 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 21 00:57:54.800733 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 21 00:57:54.845256 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 21 00:57:54.849846 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 21 00:57:54.849846 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 21 00:57:54.849846 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 21 00:57:54.849846 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 21 00:57:54.849846 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 00:57:54.849846 ignition[1809]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 00:57:54.849846 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 00:57:54.849846 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 00:57:54.873637 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 00:57:54.873637 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 00:57:54.873637 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 00:57:54.873637 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 00:57:54.873637 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 00:57:54.873637 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 21 00:57:55.419332 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 21 00:57:56.510028 ignition[1809]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 00:57:56.510028 ignition[1809]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 21 00:57:56.538772 ignition[1809]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 00:57:56.546595 ignition[1809]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 00:57:56.546595 ignition[1809]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 21 00:57:56.546595 ignition[1809]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 21 00:57:56.556418 ignition[1809]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 21 00:57:56.556418 ignition[1809]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 21 00:57:56.556418 ignition[1809]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 21 00:57:56.556418 ignition[1809]: INFO : files: files passed Jan 21 00:57:56.556418 ignition[1809]: INFO : Ignition finished successfully Jan 21 00:57:56.575055 kernel: audit: type=1130 audit(1768957076.555:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:56.550876 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 21 00:57:56.561937 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 21 00:57:56.570893 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 21 00:57:56.588967 kernel: audit: type=1130 audit(1768957076.580:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.588999 kernel: audit: type=1131 audit(1768957076.580:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.581259 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 21 00:57:56.581356 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 21 00:57:56.598927 initrd-setup-root-after-ignition[1842]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 00:57:56.598927 initrd-setup-root-after-ignition[1842]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 21 00:57:56.602355 initrd-setup-root-after-ignition[1846]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 00:57:56.601756 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 00:57:56.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.609223 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 21 00:57:56.618889 kernel: audit: type=1130 audit(1768957076.607:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.613751 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 21 00:57:56.660063 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 21 00:57:56.660151 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 21 00:57:56.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.665384 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 21 00:57:56.671070 kernel: audit: type=1130 audit(1768957076.663:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.671090 kernel: audit: type=1131 audit(1768957076.663:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.673834 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 21 00:57:56.676518 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 21 00:57:56.677131 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 21 00:57:56.695400 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 00:57:56.701817 kernel: audit: type=1130 audit(1768957076.695:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.701901 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 21 00:57:56.720066 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 00:57:56.720271 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 21 00:57:56.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.724952 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 00:57:56.725175 systemd[1]: Stopped target timers.target - Timer Units. Jan 21 00:57:56.725444 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 21 00:57:56.725560 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 00:57:56.726018 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 21 00:57:56.726311 systemd[1]: Stopped target basic.target - Basic System. Jan 21 00:57:56.736227 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 21 00:57:56.740260 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 00:57:56.743043 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 21 00:57:56.752897 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 21 00:57:56.756890 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 21 00:57:56.758554 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 00:57:56.763516 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 21 00:57:56.767883 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 21 00:57:56.769243 systemd[1]: Stopped target swap.target - Swaps. Jan 21 00:57:56.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:56.772894 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 21 00:57:56.773026 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 21 00:57:56.773682 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 21 00:57:56.774268 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 00:57:56.774503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 21 00:57:56.778217 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 00:57:56.789623 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 21 00:57:56.790790 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 21 00:57:56.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.793480 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 21 00:57:56.793608 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 00:57:56.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.797964 systemd[1]: ignition-files.service: Deactivated successfully. Jan 21 00:57:56.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.798088 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 21 00:57:56.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.802962 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 21 00:57:56.803089 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 21 00:57:56.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.808976 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 21 00:57:56.813841 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 21 00:57:56.814013 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 00:57:56.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.817522 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 21 00:57:56.823838 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 21 00:57:56.824001 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 00:57:56.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:56.826938 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 21 00:57:56.827047 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 00:57:56.839763 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 21 00:57:56.839880 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 00:57:56.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.855258 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 21 00:57:56.856610 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 21 00:57:56.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.866907 ignition[1866]: INFO : Ignition 2.24.0 Jan 21 00:57:56.866907 ignition[1866]: INFO : Stage: umount Jan 21 00:57:56.866907 ignition[1866]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 00:57:56.866907 ignition[1866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 21 00:57:56.866907 ignition[1866]: INFO : umount: umount passed Jan 21 00:57:56.866907 ignition[1866]: INFO : Ignition finished successfully Jan 21 00:57:56.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.860381 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 21 00:57:56.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.860455 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 21 00:57:56.872715 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 21 00:57:56.872810 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 21 00:57:56.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.875803 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 21 00:57:56.875842 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 21 00:57:56.880304 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 21 00:57:56.880353 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 21 00:57:56.884232 systemd[1]: Stopped target network.target - Network. 
Jan 21 00:57:56.887413 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 21 00:57:56.887464 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 00:57:56.892957 systemd[1]: Stopped target paths.target - Path Units. Jan 21 00:57:56.895486 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 21 00:57:56.896130 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 00:57:56.903879 systemd[1]: Stopped target slices.target - Slice Units. Jan 21 00:57:56.908950 systemd[1]: Stopped target sockets.target - Socket Units. Jan 21 00:57:56.917242 systemd[1]: iscsid.socket: Deactivated successfully. Jan 21 00:57:56.917284 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 00:57:56.921828 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 21 00:57:56.921862 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 00:57:56.924917 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 21 00:57:56.924943 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 21 00:57:56.927576 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 21 00:57:56.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.928602 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 21 00:57:56.932070 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 21 00:57:56.932105 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 21 00:57:56.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.938397 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 21 00:57:56.939157 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 21 00:57:56.945038 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 21 00:57:56.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.945123 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 21 00:57:56.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.953000 audit: BPF prog-id=9 op=UNLOAD Jan 21 00:57:56.949341 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 21 00:57:56.954000 audit: BPF prog-id=6 op=UNLOAD Jan 21 00:57:56.949414 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 21 00:57:56.954013 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 21 00:57:56.956694 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 21 00:57:56.956726 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 21 00:57:56.961152 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 21 00:57:56.967834 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 21 00:57:56.967891 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 00:57:56.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.977989 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 21 00:57:56.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.978041 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 21 00:57:56.979973 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 21 00:57:56.980013 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 21 00:57:56.984457 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 00:57:56.997541 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 21 00:57:57.000931 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 00:57:57.011852 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd6457 eth0: Data path switched from VF: enP30832s1 Jan 21 00:57:57.012060 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 21 00:57:57.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.006035 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 21 00:57:57.006090 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 21 00:57:57.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.009408 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 21 00:57:57.009445 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 00:57:57.014831 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 21 00:57:57.014896 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 21 00:57:57.024807 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 21 00:57:57.024859 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 21 00:57:57.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.029799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 21 00:57:57.029849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 00:57:57.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:57.035475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 21 00:57:57.037834 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 21 00:57:57.040634 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 00:57:57.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.043627 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 21 00:57:57.043679 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 00:57:57.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.052254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 00:57:57.053286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:57:57.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.056219 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 21 00:57:57.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.057824 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 21 00:57:57.063674 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 21 00:57:57.065325 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 21 00:57:57.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.103175 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 21 00:57:57.594270 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 21 00:57:57.594380 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 21 00:57:57.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.599139 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 21 00:57:57.604425 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 21 00:57:57.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:57.604518 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 21 00:57:57.608915 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 21 00:57:57.625118 systemd[1]: Switching root. 
Jan 21 00:57:57.682665 systemd-journald[1014]: Journal stopped Jan 21 00:58:02.201552 systemd-journald[1014]: Received SIGTERM from PID 1 (systemd). Jan 21 00:58:02.201579 kernel: SELinux: policy capability network_peer_controls=1 Jan 21 00:58:02.201593 kernel: SELinux: policy capability open_perms=1 Jan 21 00:58:02.201604 kernel: SELinux: policy capability extended_socket_class=1 Jan 21 00:58:02.201613 kernel: SELinux: policy capability always_check_network=0 Jan 21 00:58:02.201623 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 21 00:58:02.201633 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 21 00:58:02.201643 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 21 00:58:02.201653 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 21 00:58:02.201662 kernel: SELinux: policy capability userspace_initial_context=0 Jan 21 00:58:02.201672 systemd[1]: Successfully loaded SELinux policy in 179.995ms. Jan 21 00:58:02.201684 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.410ms. Jan 21 00:58:02.201696 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 00:58:02.201709 systemd[1]: Detected virtualization microsoft. Jan 21 00:58:02.201720 systemd[1]: Detected architecture x86-64. Jan 21 00:58:02.201730 systemd[1]: Detected first boot. Jan 21 00:58:02.201740 systemd[1]: Hostname set to <ci-4547.0.0-n-ed178c4493>. Jan 21 00:58:02.201751 systemd[1]: Initializing machine ID from random generator. Jan 21 00:58:02.201761 zram_generator::config[1910]: No configuration found. Jan 21 00:58:02.201810 kernel: Guest personality initialized and is inactive Jan 21 00:58:02.201820 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 21 00:58:02.201829 kernel: Initialized host personality Jan 21 00:58:02.201838 kernel: NET: Registered PF_VSOCK protocol family Jan 21 00:58:02.201848 systemd[1]: Populated /etc with preset unit settings. Jan 21 00:58:02.201861 kernel: kauditd_printk_skb: 44 callbacks suppressed Jan 21 00:58:02.201872 kernel: audit: type=1334 audit(1768957081.651:90): prog-id=12 op=LOAD Jan 21 00:58:02.201881 kernel: audit: type=1334 audit(1768957081.651:91): prog-id=3 op=UNLOAD Jan 21 00:58:02.201890 kernel: audit: type=1334 audit(1768957081.651:92): prog-id=13 op=LOAD Jan 21 00:58:02.201900 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 21 00:58:02.201909 kernel: audit: type=1334 audit(1768957081.651:93): prog-id=14 op=LOAD Jan 21 00:58:02.201921 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 21 00:58:02.201931 kernel: audit: type=1334 audit(1768957081.651:94): prog-id=4 op=UNLOAD Jan 21 00:58:02.201941 kernel: audit: type=1334 audit(1768957081.651:95): prog-id=5 op=UNLOAD Jan 21 00:58:02.201952 kernel: audit: type=1131 audit(1768957081.653:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jan 21 00:58:02.201961 kernel: audit: type=1334 audit(1768957081.663:97): prog-id=12 op=UNLOAD Jan 21 00:58:02.201971 kernel: audit: type=1130 audit(1768957081.670:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.201982 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 21 00:58:02.201993 kernel: audit: type=1131 audit(1768957081.670:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.202007 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 21 00:58:02.202018 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 21 00:58:02.202033 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 21 00:58:02.202043 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 21 00:58:02.202055 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 21 00:58:02.202065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 21 00:58:02.202076 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 21 00:58:02.202087 systemd[1]: Created slice user.slice - User and Session Slice. Jan 21 00:58:02.202098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 00:58:02.202110 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 00:58:02.202122 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 21 00:58:02.202131 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 21 00:58:02.202142 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 21 00:58:02.202153 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 00:58:02.202164 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 21 00:58:02.202175 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 00:58:02.202188 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 00:58:02.202198 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 21 00:58:02.202209 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 21 00:58:02.202218 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 21 00:58:02.202229 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 21 00:58:02.202240 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 00:58:02.202251 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 00:58:02.202264 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 21 00:58:02.202274 systemd[1]: Reached target slices.target - Slice Units. Jan 21 00:58:02.202284 systemd[1]: Reached target swap.target - Swaps. Jan 21 00:58:02.202294 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jan 21 00:58:02.202304 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 21 00:58:02.202318 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 21 00:58:02.202330 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 00:58:02.202341 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 21 00:58:02.202351 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 00:58:02.202362 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 21 00:58:02.202374 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 21 00:58:02.202384 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 00:58:02.202395 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 00:58:02.202407 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 21 00:58:02.202418 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 21 00:58:02.202428 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 21 00:58:02.202438 systemd[1]: Mounting media.mount - External Media Directory... Jan 21 00:58:02.202450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:58:02.202461 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 21 00:58:02.202472 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 21 00:58:02.202483 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 21 00:58:02.202495 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 21 00:58:02.202506 systemd[1]: Reached target machines.target - Containers. Jan 21 00:58:02.202517 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 21 00:58:02.202531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 00:58:02.202545 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 00:58:02.202561 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 21 00:58:02.202574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 00:58:02.202588 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 00:58:02.202602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 00:58:02.202618 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 21 00:58:02.202629 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 00:58:02.202645 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 21 00:58:02.202658 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 21 00:58:02.202672 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 21 00:58:02.202687 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 21 00:58:02.202703 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 21 00:58:02.202724 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 00:58:02.202739 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 00:58:02.202750 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 00:58:02.202763 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 00:58:02.202788 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 21 00:58:02.202803 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 21 00:58:02.202818 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 00:58:02.202831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:58:02.202846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 21 00:58:02.202859 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 21 00:58:02.202875 systemd[1]: Mounted media.mount - External Media Directory. Jan 21 00:58:02.202889 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 21 00:58:02.202902 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 21 00:58:02.202918 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 21 00:58:02.202933 kernel: fuse: init (API version 7.41) Jan 21 00:58:02.202945 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 00:58:02.202959 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 21 00:58:02.202972 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 21 00:58:02.202986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 00:58:02.202999 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 00:58:02.203020 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 00:58:02.203032 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 00:58:02.203046 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 21 00:58:02.203058 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 21 00:58:02.203071 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 00:58:02.203087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 00:58:02.203116 systemd-journald[1993]: Collecting audit messages is enabled. Jan 21 00:58:02.203145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 00:58:02.203157 systemd-journald[1993]: Journal started Jan 21 00:58:02.203179 systemd-journald[1993]: Runtime Journal (/run/log/journal/d1e580d8b5e04c6697333680e8a27bb4) is 8M, max 158.5M, 150.5M free. Jan 21 00:58:01.791000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 21 00:58:02.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:58:02.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.056000 audit: BPF prog-id=14 op=UNLOAD Jan 21 00:58:02.056000 audit: BPF prog-id=13 op=UNLOAD Jan 21 00:58:02.057000 audit: BPF prog-id=15 op=LOAD Jan 21 00:58:02.057000 audit: BPF prog-id=16 op=LOAD Jan 21 00:58:02.057000 audit: BPF prog-id=17 op=LOAD Jan 21 00:58:02.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.197000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 21 00:58:02.197000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff8d5e5a40 a2=4000 a3=0 items=0 ppid=1 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:02.197000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 21 00:58:02.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:58:02.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:01.644826 systemd[1]: Queued start job for default target multi-user.target. Jan 21 00:58:01.652883 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 21 00:58:01.654306 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 21 00:58:02.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.207866 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 00:58:02.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.211849 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 00:58:02.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.216747 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 21 00:58:02.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.225526 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 00:58:02.230046 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 21 00:58:02.236915 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 21 00:58:02.242258 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 21 00:58:02.246857 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 21 00:58:02.246898 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 00:58:02.250280 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 21 00:58:02.253996 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 00:58:02.254093 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 00:58:02.255124 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 21 00:58:02.259749 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 21 00:58:02.263904 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 00:58:02.265909 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 21 00:58:02.269931 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 00:58:02.274081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 00:58:02.279314 kernel: ACPI: bus type drm_connector registered Jan 21 00:58:02.278894 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 21 00:58:02.288953 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 21 00:58:02.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.291640 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 00:58:02.291889 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 00:58:02.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.294861 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 21 00:58:02.298597 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 21 00:58:02.301858 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 21 00:58:02.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.311031 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 21 00:58:02.317891 systemd-journald[1993]: Time spent on flushing to /var/log/journal/d1e580d8b5e04c6697333680e8a27bb4 is 14.734ms for 1126 entries. Jan 21 00:58:02.317891 systemd-journald[1993]: System Journal (/var/log/journal/d1e580d8b5e04c6697333680e8a27bb4) is 8M, max 2.2G, 2.2G free. Jan 21 00:58:02.358764 systemd-journald[1993]: Received client request to flush runtime journal. Jan 21 00:58:02.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.318023 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 21 00:58:02.323142 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 21 00:58:02.329942 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 21 00:58:02.346566 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 00:58:02.359511 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 21 00:58:02.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.371787 kernel: loop1: detected capacity change from 0 to 111560 Jan 21 00:58:02.406894 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 21 00:58:02.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.415083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 00:58:02.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.523853 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 21 00:58:02.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.525000 audit: BPF prog-id=18 op=LOAD Jan 21 00:58:02.525000 audit: BPF prog-id=19 op=LOAD Jan 21 00:58:02.525000 audit: BPF prog-id=20 op=LOAD Jan 21 00:58:02.527024 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 21 00:58:02.530000 audit: BPF prog-id=21 op=LOAD Jan 21 00:58:02.532906 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 00:58:02.536003 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 00:58:02.540000 audit: BPF prog-id=22 op=LOAD Jan 21 00:58:02.540000 audit: BPF prog-id=23 op=LOAD Jan 21 00:58:02.540000 audit: BPF prog-id=24 op=LOAD Jan 21 00:58:02.544000 audit: BPF prog-id=25 op=LOAD Jan 21 00:58:02.544000 audit: BPF prog-id=26 op=LOAD Jan 21 00:58:02.545000 audit: BPF prog-id=27 op=LOAD Jan 21 00:58:02.542939 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 21 00:58:02.546954 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 21 00:58:02.603991 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 21 00:58:02.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.608934 systemd-nsresourced[2071]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 21 00:58:02.609324 systemd-tmpfiles[2070]: ACLs are not supported, ignoring. Jan 21 00:58:02.609336 systemd-tmpfiles[2070]: ACLs are not supported, ignoring. Jan 21 00:58:02.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.610535 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 21 00:58:02.617411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 21 00:58:02.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.658021 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 21 00:58:02.697098 systemd-oomd[2068]: No swap; memory pressure usage will be degraded Jan 21 00:58:02.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.697850 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 21 00:58:02.720677 systemd-resolved[2069]: Positive Trust Anchors: Jan 21 00:58:02.720807 systemd-resolved[2069]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 00:58:02.720811 systemd-resolved[2069]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 00:58:02.720852 systemd-resolved[2069]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 00:58:02.808116 kernel: loop2: detected capacity change from 0 to 229808 Jan 21 00:58:02.819867 systemd-resolved[2069]: Using system hostname 'ci-4547.0.0-n-ed178c4493'. Jan 21 00:58:02.820852 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 00:58:02.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:02.823960 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 00:58:02.884789 kernel: loop3: detected capacity change from 0 to 27728 Jan 21 00:58:03.113712 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 21 00:58:03.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:03.116000 audit: BPF prog-id=8 op=UNLOAD Jan 21 00:58:03.116000 audit: BPF prog-id=7 op=UNLOAD Jan 21 00:58:03.116000 audit: BPF prog-id=28 op=LOAD Jan 21 00:58:03.116000 audit: BPF prog-id=29 op=LOAD Jan 21 00:58:03.118276 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 00:58:03.152957 systemd-udevd[2094]: Using default interface naming scheme 'v257'. Jan 21 00:58:03.354267 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 00:58:03.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:58:03.359000 audit: BPF prog-id=30 op=LOAD Jan 21 00:58:03.363915 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 00:58:03.410593 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 21 00:58:03.433845 kernel: loop4: detected capacity change from 0 to 50784 Jan 21 00:58:03.479453 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 21 00:58:03.484795 kernel: hv_vmbus: registering driver hyperv_fb Jan 21 00:58:03.488744 kernel: mousedev: PS/2 mouse device common for all mice Jan 21 00:58:03.488828 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 21 00:58:03.487177 systemd-networkd[2103]: lo: Link UP Jan 21 00:58:03.487185 systemd-networkd[2103]: lo: Gained carrier Jan 21 00:58:03.489509 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 00:58:03.489738 systemd-networkd[2103]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:58:03.489829 systemd-networkd[2103]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 00:58:03.492790 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 21 00:58:03.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:03.494219 systemd[1]: Reached target network.target - Network. Jan 21 00:58:03.495786 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 21 00:58:03.499157 kernel: Console: switching to colour dummy device 80x25 Jan 21 00:58:03.499585 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 21 00:58:03.503792 kernel: Console: switching to colour frame buffer device 128x48 Jan 21 00:58:03.505071 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 21 00:58:03.510836 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 21 00:58:03.514263 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd6457 eth0: Data path switched to VF: enP30832s1 Jan 21 00:58:03.520112 systemd-networkd[2103]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:58:03.520167 systemd-networkd[2103]: enP30832s1: Link UP Jan 21 00:58:03.520410 systemd-networkd[2103]: eth0: Link UP Jan 21 00:58:03.520419 systemd-networkd[2103]: eth0: Gained carrier Jan 21 00:58:03.520432 systemd-networkd[2103]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:58:03.525230 systemd-networkd[2103]: enP30832s1: Gained carrier Jan 21 00:58:03.533861 systemd-networkd[2103]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 21 00:58:03.543820 kernel: hv_vmbus: registering driver hv_balloon Jan 21 00:58:03.569796 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 21 00:58:03.590820 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
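The eth0 bring-up above shows systemd-networkd matching the stock catch-all unit /usr/lib/systemd/network/zz-default.network and then obtaining 10.200.8.39/24 over DHCP. Purely as an illustrative sketch (not the file Flatcar actually ships), a catch-all DHCP unit of this kind looks roughly like:

    [Match]
    Name=*

    [Network]
    DHCP=yes

The "Found matching .network file, based on potentially unpredictable interface name" warning only notes that the match relied on the kernel-assigned name (eth0); a site-specific .network file that matches on a stable key such as MACAddress= avoids depending on that name.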
Jan 21 00:58:03.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:03.646084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:58:03.662095 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 00:58:03.662316 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:58:03.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:03.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:03.668937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:58:03.683304 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 00:58:03.683816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:58:03.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:03.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:03.688164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:58:03.834794 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 21 00:58:03.846120 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 21 00:58:03.848921 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 21 00:58:03.911789 kernel: loop5: detected capacity change from 0 to 111560 Jan 21 00:58:03.912328 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 21 00:58:03.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:03.928805 kernel: loop6: detected capacity change from 0 to 229808 Jan 21 00:58:03.943905 kernel: loop7: detected capacity change from 0 to 27728 Jan 21 00:58:03.958798 kernel: loop1: detected capacity change from 0 to 50784 Jan 21 00:58:04.016457 (sd-merge)[2179]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 21 00:58:04.019545 (sd-merge)[2179]: Merged extensions into '/usr'. Jan 21 00:58:04.023213 systemd[1]: Reload requested from client PID 2049 ('systemd-sysext') (unit systemd-sysext.service)... Jan 21 00:58:04.023226 systemd[1]: Reloading... Jan 21 00:58:04.072797 zram_generator::config[2220]: No configuration found. Jan 21 00:58:04.268484 systemd[1]: Reloading finished in 244 ms. 
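The (sd-merge) lines above are systemd-sysext overlaying the listed extension images (containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw, oem-azure.raw) onto /usr, followed by a daemon reload so units provided by the extensions become visible. On a running system the resulting overlay can be inspected or re-applied with the same tool, for example:

    systemd-sysext status
    systemd-sysext refresh

(refresh re-merges after extension images are added or removed; this is generic systemd-sysext usage, not taken from this log.)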
Jan 21 00:58:04.298142 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 21 00:58:04.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.300228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:58:04.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.310587 systemd[1]: Starting ensure-sysext.service... Jan 21 00:58:04.314938 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 00:58:04.316000 audit: BPF prog-id=31 op=LOAD Jan 21 00:58:04.325000 audit: BPF prog-id=32 op=LOAD Jan 21 00:58:04.325000 audit: BPF prog-id=28 op=UNLOAD Jan 21 00:58:04.325000 audit: BPF prog-id=29 op=UNLOAD Jan 21 00:58:04.327000 audit: BPF prog-id=33 op=LOAD Jan 21 00:58:04.327000 audit: BPF prog-id=30 op=UNLOAD Jan 21 00:58:04.328000 audit: BPF prog-id=34 op=LOAD Jan 21 00:58:04.328000 audit: BPF prog-id=25 op=UNLOAD Jan 21 00:58:04.328000 audit: BPF prog-id=35 op=LOAD Jan 21 00:58:04.328000 audit: BPF prog-id=36 op=LOAD Jan 21 00:58:04.328000 audit: BPF prog-id=26 op=UNLOAD Jan 21 00:58:04.328000 audit: BPF prog-id=27 op=UNLOAD Jan 21 00:58:04.330000 audit: BPF prog-id=37 op=LOAD Jan 21 00:58:04.330000 audit: BPF prog-id=15 op=UNLOAD Jan 21 00:58:04.330000 audit: BPF prog-id=38 op=LOAD Jan 21 00:58:04.330000 audit: BPF prog-id=39 op=LOAD Jan 21 00:58:04.330000 audit: BPF prog-id=16 op=UNLOAD Jan 21 00:58:04.330000 audit: BPF prog-id=17 op=UNLOAD Jan 21 00:58:04.330000 audit: BPF prog-id=40 op=LOAD Jan 21 00:58:04.330000 audit: BPF prog-id=18 op=UNLOAD Jan 21 00:58:04.330000 audit: BPF prog-id=41 op=LOAD Jan 21 00:58:04.330000 audit: BPF prog-id=42 op=LOAD Jan 21 00:58:04.330000 audit: BPF prog-id=19 op=UNLOAD Jan 21 00:58:04.330000 audit: BPF prog-id=20 op=UNLOAD Jan 21 00:58:04.331000 audit: BPF prog-id=43 op=LOAD Jan 21 00:58:04.331000 audit: BPF prog-id=22 op=UNLOAD Jan 21 00:58:04.331000 audit: BPF prog-id=44 op=LOAD Jan 21 00:58:04.331000 audit: BPF prog-id=45 op=LOAD Jan 21 00:58:04.331000 audit: BPF prog-id=23 op=UNLOAD Jan 21 00:58:04.331000 audit: BPF prog-id=24 op=UNLOAD Jan 21 00:58:04.332000 audit: BPF prog-id=46 op=LOAD Jan 21 00:58:04.332000 audit: BPF prog-id=21 op=UNLOAD Jan 21 00:58:04.338560 systemd[1]: Reload requested from client PID 2273 ('systemctl') (unit ensure-sysext.service)... Jan 21 00:58:04.338579 systemd[1]: Reloading... Jan 21 00:58:04.339692 systemd-tmpfiles[2274]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 21 00:58:04.339970 systemd-tmpfiles[2274]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 21 00:58:04.340239 systemd-tmpfiles[2274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 21 00:58:04.341374 systemd-tmpfiles[2274]: ACLs are not supported, ignoring. Jan 21 00:58:04.341501 systemd-tmpfiles[2274]: ACLs are not supported, ignoring. Jan 21 00:58:04.397833 systemd-tmpfiles[2274]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 21 00:58:04.397937 systemd-tmpfiles[2274]: Skipping /boot Jan 21 00:58:04.401610 zram_generator::config[2308]: No configuration found. Jan 21 00:58:04.411009 systemd-tmpfiles[2274]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 00:58:04.411018 systemd-tmpfiles[2274]: Skipping /boot Jan 21 00:58:04.583966 systemd[1]: Reloading finished in 245 ms. Jan 21 00:58:04.597000 audit: BPF prog-id=47 op=LOAD Jan 21 00:58:04.597000 audit: BPF prog-id=37 op=UNLOAD Jan 21 00:58:04.597000 audit: BPF prog-id=48 op=LOAD Jan 21 00:58:04.597000 audit: BPF prog-id=49 op=LOAD Jan 21 00:58:04.597000 audit: BPF prog-id=38 op=UNLOAD Jan 21 00:58:04.597000 audit: BPF prog-id=39 op=UNLOAD Jan 21 00:58:04.598000 audit: BPF prog-id=50 op=LOAD Jan 21 00:58:04.598000 audit: BPF prog-id=34 op=UNLOAD Jan 21 00:58:04.599000 audit: BPF prog-id=51 op=LOAD Jan 21 00:58:04.599000 audit: BPF prog-id=52 op=LOAD Jan 21 00:58:04.599000 audit: BPF prog-id=35 op=UNLOAD Jan 21 00:58:04.599000 audit: BPF prog-id=36 op=UNLOAD Jan 21 00:58:04.599000 audit: BPF prog-id=53 op=LOAD Jan 21 00:58:04.599000 audit: BPF prog-id=33 op=UNLOAD Jan 21 00:58:04.600000 audit: BPF prog-id=54 op=LOAD Jan 21 00:58:04.600000 audit: BPF prog-id=55 op=LOAD Jan 21 00:58:04.600000 audit: BPF prog-id=31 op=UNLOAD Jan 21 00:58:04.600000 audit: BPF prog-id=32 op=UNLOAD Jan 21 00:58:04.600000 audit: BPF prog-id=56 op=LOAD Jan 21 00:58:04.600000 audit: BPF prog-id=40 op=UNLOAD Jan 21 00:58:04.600000 audit: BPF prog-id=57 op=LOAD Jan 21 00:58:04.600000 audit: BPF prog-id=58 op=LOAD Jan 21 00:58:04.600000 audit: BPF prog-id=41 op=UNLOAD Jan 21 00:58:04.600000 audit: BPF prog-id=42 op=UNLOAD Jan 21 00:58:04.601000 audit: BPF prog-id=59 op=LOAD Jan 21 00:58:04.610000 audit: BPF prog-id=46 op=UNLOAD Jan 21 00:58:04.610000 audit: BPF prog-id=60 op=LOAD Jan 21 00:58:04.610000 audit: BPF prog-id=43 op=UNLOAD Jan 21 00:58:04.610000 audit: BPF prog-id=61 op=LOAD Jan 21 00:58:04.610000 audit: BPF prog-id=62 op=LOAD Jan 21 00:58:04.610000 audit: BPF prog-id=44 op=UNLOAD Jan 21 00:58:04.610000 audit: BPF prog-id=45 op=UNLOAD Jan 21 00:58:04.613533 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 00:58:04.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.624663 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 21 00:58:04.633109 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 21 00:58:04.637002 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 21 00:58:04.642391 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 21 00:58:04.648234 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 21 00:58:04.656642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:58:04.656975 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 00:58:04.657000 audit[2372]: SYSTEM_BOOT pid=2372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Jan 21 00:58:04.659002 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 00:58:04.664004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 00:58:04.668965 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 00:58:04.670908 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 00:58:04.671088 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 00:58:04.671179 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 00:58:04.671264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:58:04.672360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 00:58:04.672567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 00:58:04.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.676754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 00:58:04.677312 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 00:58:04.680152 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 00:58:04.680334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 00:58:04.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.691130 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:58:04.691747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 00:58:04.692669 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 21 00:58:04.697000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 00:58:04.701721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 00:58:04.703869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 00:58:04.704016 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 00:58:04.704103 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 00:58:04.704185 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:58:04.705738 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 21 00:58:04.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.710393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 00:58:04.710584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 00:58:04.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.714470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 00:58:04.714711 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 00:58:04.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.716955 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 00:58:04.717125 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 00:58:04.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.724317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
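The repeated modprobe@<module>.service start/finish/deactivate cycles in this stretch come from systemd's modprobe@.service template: a short-lived oneshot unit instantiated per module name, which is why each instance reports "Deactivated successfully" as soon as the modprobe call exits. A simplified sketch of such a template (the unit systemd actually ships adds further dependencies and options):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # Approximate; the shipped unit's modprobe invocation and path may differ.
    ExecStart=-/sbin/modprobe -abq %I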
Jan 21 00:58:04.724523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 00:58:04.725446 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 00:58:04.729993 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 00:58:04.736970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 00:58:04.741059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 00:58:04.745013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 00:58:04.745954 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 00:58:04.746055 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 00:58:04.746210 systemd[1]: Reached target time-set.target - System Time Set. Jan 21 00:58:04.747987 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:58:04.750022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 00:58:04.750339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 00:58:04.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.752346 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 00:58:04.752787 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 00:58:04.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.758867 systemd[1]: Finished ensure-sysext.service. Jan 21 00:58:04.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.760354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 00:58:04.760536 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 00:58:04.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:58:04.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.763420 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 00:58:04.763629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 00:58:04.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.767499 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 21 00:58:04.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.772565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 00:58:04.772643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 00:58:04.955892 systemd-networkd[2103]: eth0: Gained IPv6LL Jan 21 00:58:04.957816 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 21 00:58:04.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:04.961057 systemd[1]: Reached target network-online.target - Network is Online. Jan 21 00:58:05.123000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 21 00:58:05.123000 audit[2416]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5a1b8bf0 a2=420 a3=0 items=0 ppid=2368 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:05.123000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 00:58:05.124220 augenrules[2416]: No rules Jan 21 00:58:05.125011 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 00:58:05.125260 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 00:58:05.911969 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 21 00:58:05.915072 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 21 00:58:10.860490 ldconfig[2370]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 21 00:58:10.870071 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 21 00:58:10.874984 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 21 00:58:10.896531 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 21 00:58:10.899009 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 00:58:10.900213 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 21 00:58:10.901520 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 21 00:58:10.903812 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 21 00:58:10.906927 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 21 00:58:10.909867 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 21 00:58:10.912819 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 21 00:58:10.915870 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 21 00:58:10.918817 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 21 00:58:10.921825 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 21 00:58:10.921855 systemd[1]: Reached target paths.target - Path Units. Jan 21 00:58:10.922779 systemd[1]: Reached target timers.target - Timer Units. Jan 21 00:58:10.924848 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 21 00:58:10.928733 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 21 00:58:10.932357 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 21 00:58:10.935934 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 21 00:58:10.938823 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 21 00:58:10.941664 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 21 00:58:10.945107 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 21 00:58:10.948329 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 21 00:58:10.951479 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 00:58:10.953822 systemd[1]: Reached target basic.target - Basic System. Jan 21 00:58:10.954935 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 21 00:58:10.954960 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 21 00:58:10.956726 systemd[1]: Starting chronyd.service - NTP client/server... Jan 21 00:58:10.959678 systemd[1]: Starting containerd.service - containerd container runtime... Jan 21 00:58:10.962953 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 21 00:58:10.966900 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 21 00:58:10.971931 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 21 00:58:10.979544 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 21 00:58:10.984933 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 21 00:58:10.987880 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 21 00:58:10.990261 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 21 00:58:10.992025 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 21 00:58:10.996043 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 21 00:58:10.999938 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 21 00:58:11.001903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:58:11.004052 jq[2433]: false Jan 21 00:58:11.008872 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 21 00:58:11.011306 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 21 00:58:11.016891 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 21 00:58:11.020876 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 21 00:58:11.025687 KVP[2439]: KVP starting; pid is:2439 Jan 21 00:58:11.027948 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 21 00:58:11.035821 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Refreshing passwd entry cache Jan 21 00:58:11.033790 oslogin_cache_refresh[2438]: Refreshing passwd entry cache Jan 21 00:58:11.038434 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 21 00:58:11.041889 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 21 00:58:11.042921 extend-filesystems[2437]: Found /dev/nvme0n1p6 Jan 21 00:58:11.047591 kernel: hv_utils: KVP IC version 4.0 Jan 21 00:58:11.045698 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 21 00:58:11.045178 KVP[2439]: KVP LIC Version: 3.1 Jan 21 00:58:11.049825 systemd[1]: Starting update-engine.service - Update Engine... Jan 21 00:58:11.053011 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 21 00:58:11.060127 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 21 00:58:11.062580 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 21 00:58:11.062979 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 21 00:58:11.067352 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Failure getting users, quitting Jan 21 00:58:11.067352 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 21 00:58:11.067352 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Refreshing group entry cache Jan 21 00:58:11.066946 oslogin_cache_refresh[2438]: Failure getting users, quitting Jan 21 00:58:11.066963 oslogin_cache_refresh[2438]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 21 00:58:11.067007 oslogin_cache_refresh[2438]: Refreshing group entry cache Jan 21 00:58:11.073127 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 21 00:58:11.073338 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 21 00:58:11.087349 oslogin_cache_refresh[2438]: Failure getting groups, quitting Jan 21 00:58:11.088008 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Failure getting groups, quitting Jan 21 00:58:11.088008 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 21 00:58:11.087359 oslogin_cache_refresh[2438]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 21 00:58:11.089703 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 21 00:58:11.090861 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 21 00:58:11.100155 extend-filesystems[2437]: Found /dev/nvme0n1p9 Jan 21 00:58:11.103940 jq[2454]: true Jan 21 00:58:11.102051 systemd[1]: motdgen.service: Deactivated successfully. Jan 21 00:58:11.102824 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 21 00:58:11.111904 extend-filesystems[2437]: Checking size of /dev/nvme0n1p9 Jan 21 00:58:11.126428 jq[2483]: true Jan 21 00:58:11.151863 extend-filesystems[2437]: Resized partition /dev/nvme0n1p9 Jan 21 00:58:11.170797 extend-filesystems[2495]: resize2fs 1.47.3 (8-Jul-2025) Jan 21 00:58:11.185816 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 21 00:58:11.185868 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 21 00:58:11.185883 update_engine[2453]: I20260121 00:58:11.184501 2453 main.cc:92] Flatcar Update Engine starting Jan 21 00:58:11.228403 systemd-logind[2449]: New seat seat0. Jan 21 00:58:11.193989 chronyd[2428]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 21 00:58:11.243367 systemd-logind[2449]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 21 00:58:11.244204 systemd[1]: Started systemd-logind.service - User Login Management. Jan 21 00:58:11.258580 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 21 00:58:11.262365 chronyd[2428]: Timezone right/UTC failed leap second check, ignoring Jan 21 00:58:11.262671 chronyd[2428]: Loaded seccomp filter (level 2) Jan 21 00:58:11.263163 systemd[1]: Started chronyd.service - NTP client/server. Jan 21 00:58:11.296806 extend-filesystems[2495]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 21 00:58:11.296806 extend-filesystems[2495]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 21 00:58:11.296806 extend-filesystems[2495]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 21 00:58:11.305131 extend-filesystems[2437]: Resized filesystem in /dev/nvme0n1p9 Jan 21 00:58:11.301863 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 21 00:58:11.302094 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 21 00:58:11.324661 bash[2510]: Updated "/home/core/.ssh/authorized_keys" Jan 21 00:58:11.326224 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 21 00:58:11.330158 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 21 00:58:11.331169 dbus-daemon[2431]: [system] SELinux support is enabled Jan 21 00:58:11.331600 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 21 00:58:11.337832 update_engine[2453]: I20260121 00:58:11.337795 2453 update_check_scheduler.cc:74] Next update check in 10m55s Jan 21 00:58:11.339509 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 21 00:58:11.339536 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 21 00:58:11.343707 dbus-daemon[2431]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 21 00:58:11.343889 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 21 00:58:11.343907 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 21 00:58:11.347184 systemd[1]: Started update-engine.service - Update Engine. Jan 21 00:58:11.354925 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 21 00:58:11.359659 tar[2458]: linux-amd64/LICENSE Jan 21 00:58:11.359659 tar[2458]: linux-amd64/helm Jan 21 00:58:11.416850 coreos-metadata[2430]: Jan 21 00:58:11.415 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 21 00:58:11.430777 coreos-metadata[2430]: Jan 21 00:58:11.427 INFO Fetch successful Jan 21 00:58:11.430777 coreos-metadata[2430]: Jan 21 00:58:11.427 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 21 00:58:11.434357 coreos-metadata[2430]: Jan 21 00:58:11.434 INFO Fetch successful Jan 21 00:58:11.434357 coreos-metadata[2430]: Jan 21 00:58:11.434 INFO Fetching http://168.63.129.16/machine/f0ee018b-47a2-49cb-a952-9c9525b9569e/d5db3017%2D59ef%2D4b7a%2Dac1e%2De00c8f2bf02f.%5Fci%2D4547.0.0%2Dn%2Ded178c4493?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 21 00:58:11.436383 coreos-metadata[2430]: Jan 21 00:58:11.436 INFO Fetch successful Jan 21 00:58:11.436383 coreos-metadata[2430]: Jan 21 00:58:11.436 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 21 00:58:11.453929 coreos-metadata[2430]: Jan 21 00:58:11.452 INFO Fetch successful Jan 21 00:58:11.507327 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 21 00:58:11.511151 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 21 00:58:11.621882 sshd_keygen[2479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 21 00:58:11.657267 locksmithd[2540]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 21 00:58:11.657720 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 21 00:58:11.670848 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 21 00:58:11.680427 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 21 00:58:11.701686 systemd[1]: issuegen.service: Deactivated successfully. Jan 21 00:58:11.704917 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 21 00:58:11.711739 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 21 00:58:11.750380 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
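The coreos-metadata fetches above hit two Azure endpoints: the WireServer at 168.63.129.16 and the Instance Metadata Service (IMDS) at 169.254.169.254. As a minimal sketch of the same vmSize query (assuming it runs on an Azure VM where IMDS is reachable; IMDS rejects requests that lack the Metadata header):

    import urllib.request

    # Same endpoint coreos-metadata fetched above; the "Metadata: true"
    # header is mandatory for IMDS requests.
    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # prints the VM size, e.g. "Standard_D4s_v3"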
Jan 21 00:58:11.753071 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 21 00:58:11.766007 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 21 00:58:11.770036 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 21 00:58:11.771892 systemd[1]: Reached target getty.target - Login Prompts. Jan 21 00:58:11.977129 tar[2458]: linux-amd64/README.md Jan 21 00:58:11.992242 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 21 00:58:12.368715 containerd[2473]: time="2026-01-21T00:58:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 21 00:58:12.369243 containerd[2473]: time="2026-01-21T00:58:12.369173918Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 21 00:58:12.381254 containerd[2473]: time="2026-01-21T00:58:12.381080797Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.518µs" Jan 21 00:58:12.381254 containerd[2473]: time="2026-01-21T00:58:12.381113713Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 21 00:58:12.381254 containerd[2473]: time="2026-01-21T00:58:12.381150845Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 21 00:58:12.381254 containerd[2473]: time="2026-01-21T00:58:12.381163466Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 21 00:58:12.381407 containerd[2473]: time="2026-01-21T00:58:12.381326432Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 21 00:58:12.381407 containerd[2473]: time="2026-01-21T00:58:12.381344142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 00:58:12.381467 containerd[2473]: time="2026-01-21T00:58:12.381410929Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 00:58:12.381467 containerd[2473]: time="2026-01-21T00:58:12.381422715Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 00:58:12.381629 containerd[2473]: time="2026-01-21T00:58:12.381602943Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 00:58:12.381629 containerd[2473]: time="2026-01-21T00:58:12.381617895Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 00:58:12.381680 containerd[2473]: time="2026-01-21T00:58:12.381627796Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 00:58:12.381680 containerd[2473]: time="2026-01-21T00:58:12.381635745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 00:58:12.381805 containerd[2473]: time="2026-01-21T00:58:12.381765548Z" level=info msg="skip loading plugin" 
error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 00:58:12.381833 containerd[2473]: time="2026-01-21T00:58:12.381810372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 21 00:58:12.382619 containerd[2473]: time="2026-01-21T00:58:12.381876294Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 21 00:58:12.382619 containerd[2473]: time="2026-01-21T00:58:12.382347523Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 00:58:12.382619 containerd[2473]: time="2026-01-21T00:58:12.382386063Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 00:58:12.382619 containerd[2473]: time="2026-01-21T00:58:12.382397941Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 21 00:58:12.382619 containerd[2473]: time="2026-01-21T00:58:12.382427911Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 21 00:58:12.382619 containerd[2473]: time="2026-01-21T00:58:12.382622946Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 21 00:58:12.382829 containerd[2473]: time="2026-01-21T00:58:12.382689134Z" level=info msg="metadata content store policy set" policy=shared Jan 21 00:58:12.393746 containerd[2473]: time="2026-01-21T00:58:12.393710503Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 21 00:58:12.393921 containerd[2473]: time="2026-01-21T00:58:12.393875745Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 00:58:12.394059 containerd[2473]: time="2026-01-21T00:58:12.394043369Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 00:58:12.394105 containerd[2473]: time="2026-01-21T00:58:12.394097036Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 21 00:58:12.394148 containerd[2473]: time="2026-01-21T00:58:12.394139787Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 21 00:58:12.394185 containerd[2473]: time="2026-01-21T00:58:12.394177882Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 21 00:58:12.394223 containerd[2473]: time="2026-01-21T00:58:12.394216062Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 21 00:58:12.394257 containerd[2473]: time="2026-01-21T00:58:12.394250398Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 21 00:58:12.394293 containerd[2473]: time="2026-01-21T00:58:12.394286266Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 21 00:58:12.394354 containerd[2473]: time="2026-01-21T00:58:12.394337625Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Jan 21 00:58:12.394399 containerd[2473]: time="2026-01-21T00:58:12.394392899Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 21 00:58:12.394455 containerd[2473]: time="2026-01-21T00:58:12.394433800Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 21 00:58:12.394485 containerd[2473]: time="2026-01-21T00:58:12.394480576Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 21 00:58:12.394514 containerd[2473]: time="2026-01-21T00:58:12.394507005Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 21 00:58:12.394646 containerd[2473]: time="2026-01-21T00:58:12.394636764Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 21 00:58:12.394701 containerd[2473]: time="2026-01-21T00:58:12.394692874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 21 00:58:12.394739 containerd[2473]: time="2026-01-21T00:58:12.394732418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 21 00:58:12.394793 containerd[2473]: time="2026-01-21T00:58:12.394785907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 21 00:58:12.394835 containerd[2473]: time="2026-01-21T00:58:12.394824830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 21 00:58:12.394884 containerd[2473]: time="2026-01-21T00:58:12.394875892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 21 00:58:12.394922 containerd[2473]: time="2026-01-21T00:58:12.394915263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 21 00:58:12.394961 containerd[2473]: time="2026-01-21T00:58:12.394954779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 21 00:58:12.394999 containerd[2473]: time="2026-01-21T00:58:12.394991993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 21 00:58:12.395032 containerd[2473]: time="2026-01-21T00:58:12.395026730Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 21 00:58:12.395059 containerd[2473]: time="2026-01-21T00:58:12.395052887Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 21 00:58:12.395104 containerd[2473]: time="2026-01-21T00:58:12.395098341Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 21 00:58:12.395163 containerd[2473]: time="2026-01-21T00:58:12.395154076Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 21 00:58:12.395190 containerd[2473]: time="2026-01-21T00:58:12.395186030Z" level=info msg="Start snapshots syncer" Jan 21 00:58:12.395231 containerd[2473]: time="2026-01-21T00:58:12.395223810Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 21 00:58:12.396328 containerd[2473]: time="2026-01-21T00:58:12.395928513Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 21 00:58:12.396328 containerd[2473]: time="2026-01-21T00:58:12.396056144Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396196535Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396324265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396366759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396383828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396398151Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396425912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396439839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396455441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396469208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 21 
00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396494254Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 21 00:58:12.396541 containerd[2473]: time="2026-01-21T00:58:12.396534679Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 00:58:12.397256 containerd[2473]: time="2026-01-21T00:58:12.396553746Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 00:58:12.397298 containerd[2473]: time="2026-01-21T00:58:12.397257812Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 00:58:12.397298 containerd[2473]: time="2026-01-21T00:58:12.397278251Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 00:58:12.397298 containerd[2473]: time="2026-01-21T00:58:12.397288630Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 21 00:58:12.397368 containerd[2473]: time="2026-01-21T00:58:12.397318264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 21 00:58:12.397368 containerd[2473]: time="2026-01-21T00:58:12.397335068Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 21 00:58:12.397368 containerd[2473]: time="2026-01-21T00:58:12.397349032Z" level=info msg="runtime interface created" Jan 21 00:58:12.397368 containerd[2473]: time="2026-01-21T00:58:12.397358267Z" level=info msg="created NRI interface" Jan 21 00:58:12.397446 containerd[2473]: time="2026-01-21T00:58:12.397367666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 21 00:58:12.397446 containerd[2473]: time="2026-01-21T00:58:12.397396904Z" level=info msg="Connect containerd service" Jan 21 00:58:12.397446 containerd[2473]: time="2026-01-21T00:58:12.397428318Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 21 00:58:12.398489 containerd[2473]: time="2026-01-21T00:58:12.398454923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 21 00:58:12.431905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
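containerd's CRI plugin reports above that no CNI network config was found in /etc/cni/net.d, so pod networking stays uninitialized until something drops a config there (normally a network add-on after the node joins a cluster). Purely as an illustration of the file format it is looking for, a sketch that writes a basic bridge conflist; the subnet, network name, and file name are assumptions, and it presumes the standard bridge/host-local/portmap plugins exist under /opt/cni/bin:

    import json, pathlib

    # Illustrative values only; a real cluster usually gets its CNI config from its network add-on.
    conflist = {
        "cniVersion": "1.0.0",
        "name": "containerd-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2))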
Jan 21 00:58:12.439114 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 00:58:12.851798 containerd[2473]: time="2026-01-21T00:58:12.851524611Z" level=info msg="Start subscribing containerd event" Jan 21 00:58:12.851798 containerd[2473]: time="2026-01-21T00:58:12.851582165Z" level=info msg="Start recovering state" Jan 21 00:58:12.851798 containerd[2473]: time="2026-01-21T00:58:12.851676343Z" level=info msg="Start event monitor" Jan 21 00:58:12.851798 containerd[2473]: time="2026-01-21T00:58:12.851687590Z" level=info msg="Start cni network conf syncer for default" Jan 21 00:58:12.851798 containerd[2473]: time="2026-01-21T00:58:12.851694724Z" level=info msg="Start streaming server" Jan 21 00:58:12.851798 containerd[2473]: time="2026-01-21T00:58:12.851704073Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 21 00:58:12.851798 containerd[2473]: time="2026-01-21T00:58:12.851711478Z" level=info msg="runtime interface starting up..." Jan 21 00:58:12.851798 containerd[2473]: time="2026-01-21T00:58:12.851717913Z" level=info msg="starting plugins..." Jan 21 00:58:12.851798 containerd[2473]: time="2026-01-21T00:58:12.851730285Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 21 00:58:12.854935 containerd[2473]: time="2026-01-21T00:58:12.852361386Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 21 00:58:12.854935 containerd[2473]: time="2026-01-21T00:58:12.852423938Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 21 00:58:12.852686 systemd[1]: Started containerd.service - containerd container runtime. Jan 21 00:58:12.855244 containerd[2473]: time="2026-01-21T00:58:12.855225066Z" level=info msg="containerd successfully booted in 0.487134s" Jan 21 00:58:12.855940 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 21 00:58:12.858683 systemd[1]: Startup finished in 4.405s (kernel) + 11.651s (initrd) + 14.141s (userspace) = 30.197s. Jan 21 00:58:12.969276 kubelet[2593]: E0121 00:58:12.969241 2593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 00:58:12.970913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 00:58:12.971030 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 00:58:12.971342 systemd[1]: kubelet.service: Consumed 935ms CPU time, 265.4M memory peak. Jan 21 00:58:13.197108 login[2579]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:13.210428 login[2580]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:13.219013 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 21 00:58:13.221872 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 21 00:58:13.225011 systemd-logind[2449]: New session 1 of user core. Jan 21 00:58:13.233126 systemd-logind[2449]: New session 2 of user core. Jan 21 00:58:13.261088 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 21 00:58:13.263111 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 21 00:58:13.278578 (systemd)[2618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:13.280474 systemd-logind[2449]: New session 3 of user core. Jan 21 00:58:13.450100 systemd[2618]: Queued start job for default target default.target. Jan 21 00:58:13.457636 systemd[2618]: Created slice app.slice - User Application Slice. Jan 21 00:58:13.457671 systemd[2618]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 21 00:58:13.457687 systemd[2618]: Reached target paths.target - Paths. Jan 21 00:58:13.458217 systemd[2618]: Reached target timers.target - Timers. Jan 21 00:58:13.460754 systemd[2618]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 21 00:58:13.461944 systemd[2618]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 21 00:58:13.472406 systemd[2618]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 21 00:58:13.473450 systemd[2618]: Reached target sockets.target - Sockets. Jan 21 00:58:13.475083 systemd[2618]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 21 00:58:13.475259 systemd[2618]: Reached target basic.target - Basic System. Jan 21 00:58:13.475364 systemd[2618]: Reached target default.target - Main User Target. Jan 21 00:58:13.475446 systemd[2618]: Startup finished in 191ms. Jan 21 00:58:13.475508 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 21 00:58:13.483291 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 21 00:58:13.484044 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 21 00:58:13.623861 waagent[2578]: 2026-01-21T00:58:13.623764Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 21 00:58:13.625441 waagent[2578]: 2026-01-21T00:58:13.624451Z INFO Daemon Daemon OS: flatcar 4547.0.0 Jan 21 00:58:13.626625 waagent[2578]: 2026-01-21T00:58:13.626554Z INFO Daemon Daemon Python: 3.11.13 Jan 21 00:58:13.628020 waagent[2578]: 2026-01-21T00:58:13.627976Z INFO Daemon Daemon Run daemon Jan 21 00:58:13.628967 waagent[2578]: 2026-01-21T00:58:13.628938Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.0.0' Jan 21 00:58:13.629666 waagent[2578]: 2026-01-21T00:58:13.629204Z INFO Daemon Daemon Using waagent for provisioning Jan 21 00:58:13.632325 waagent[2578]: 2026-01-21T00:58:13.632298Z INFO Daemon Daemon Activate resource disk Jan 21 00:58:13.633180 waagent[2578]: 2026-01-21T00:58:13.633060Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 21 00:58:13.636829 waagent[2578]: 2026-01-21T00:58:13.636780Z INFO Daemon Daemon Found device: None Jan 21 00:58:13.637745 waagent[2578]: 2026-01-21T00:58:13.637710Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 21 00:58:13.639374 waagent[2578]: 2026-01-21T00:58:13.638846Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 21 00:58:13.641866 waagent[2578]: 2026-01-21T00:58:13.641821Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 21 00:58:13.643085 waagent[2578]: 2026-01-21T00:58:13.643054Z INFO Daemon Daemon Running default provisioning handler Jan 21 00:58:13.649518 waagent[2578]: 2026-01-21T00:58:13.649210Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. Jan 21 00:58:13.652658 waagent[2578]: 2026-01-21T00:58:13.652618Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 21 00:58:13.654709 waagent[2578]: 2026-01-21T00:58:13.653521Z INFO Daemon Daemon cloud-init is enabled: False Jan 21 00:58:13.654709 waagent[2578]: 2026-01-21T00:58:13.653818Z INFO Daemon Daemon Copying ovf-env.xml Jan 21 00:58:13.743446 waagent[2578]: 2026-01-21T00:58:13.743361Z INFO Daemon Daemon Successfully mounted dvd Jan 21 00:58:13.772327 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 21 00:58:13.774587 waagent[2578]: 2026-01-21T00:58:13.774539Z INFO Daemon Daemon Detect protocol endpoint Jan 21 00:58:13.775825 waagent[2578]: 2026-01-21T00:58:13.775749Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 21 00:58:13.776477 waagent[2578]: 2026-01-21T00:58:13.776065Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 21 00:58:13.778716 waagent[2578]: 2026-01-21T00:58:13.778649Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 21 00:58:13.780211 waagent[2578]: 2026-01-21T00:58:13.780177Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 21 00:58:13.781444 waagent[2578]: 2026-01-21T00:58:13.780701Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 21 00:58:13.801654 waagent[2578]: 2026-01-21T00:58:13.801618Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 21 00:58:13.803973 waagent[2578]: 2026-01-21T00:58:13.802279Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 21 00:58:13.803973 waagent[2578]: 2026-01-21T00:58:13.802517Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 21 00:58:13.918795 waagent[2578]: 2026-01-21T00:58:13.918721Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 21 00:58:13.920307 waagent[2578]: 2026-01-21T00:58:13.919083Z INFO Daemon Daemon Forcing an update of the goal state. Jan 21 00:58:13.933325 waagent[2578]: 2026-01-21T00:58:13.933292Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 21 00:58:13.951103 waagent[2578]: 2026-01-21T00:58:13.951073Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 21 00:58:13.952677 waagent[2578]: 2026-01-21T00:58:13.952643Z INFO Daemon Jan 21 00:58:13.953515 waagent[2578]: 2026-01-21T00:58:13.953446Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8d7ecc68-fd0b-4379-800f-26456770cb94 eTag: 9779692161971892933 source: Fabric] Jan 21 00:58:13.956376 waagent[2578]: 2026-01-21T00:58:13.956336Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 21 00:58:13.958183 waagent[2578]: 2026-01-21T00:58:13.958152Z INFO Daemon Jan 21 00:58:13.958953 waagent[2578]: 2026-01-21T00:58:13.958885Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 21 00:58:13.965170 waagent[2578]: 2026-01-21T00:58:13.965142Z INFO Daemon Daemon Downloading artifacts profile blob Jan 21 00:58:14.058182 waagent[2578]: 2026-01-21T00:58:14.058096Z INFO Daemon Downloaded certificate {'thumbprint': '040A3E75D5A08F55F65CF5D2E9108496C69C3932', 'hasPrivateKey': True} Jan 21 00:58:14.059597 waagent[2578]: 2026-01-21T00:58:14.058676Z INFO Daemon Fetch goal state completed Jan 21 00:58:14.063580 waagent[2578]: 2026-01-21T00:58:14.063542Z INFO Daemon Daemon Starting provisioning Jan 21 00:58:14.064279 waagent[2578]: 2026-01-21T00:58:14.064061Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 21 00:58:14.064940 waagent[2578]: 2026-01-21T00:58:14.064277Z INFO Daemon Daemon Set hostname [ci-4547.0.0-n-ed178c4493] Jan 21 00:58:14.067684 waagent[2578]: 2026-01-21T00:58:14.066323Z INFO Daemon Daemon Publish hostname [ci-4547.0.0-n-ed178c4493] Jan 21 00:58:14.067684 waagent[2578]: 2026-01-21T00:58:14.066698Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 21 00:58:14.067684 waagent[2578]: 2026-01-21T00:58:14.067046Z INFO Daemon Daemon Primary interface is [eth0] Jan 21 00:58:14.076787 systemd-networkd[2103]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:58:14.076795 systemd-networkd[2103]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 21 00:58:14.076854 systemd-networkd[2103]: eth0: DHCP lease lost Jan 21 00:58:14.093034 waagent[2578]: 2026-01-21T00:58:14.092990Z INFO Daemon Daemon Create user account if not exists Jan 21 00:58:14.095802 waagent[2578]: 2026-01-21T00:58:14.094070Z INFO Daemon Daemon User core already exists, skip useradd Jan 21 00:58:14.095802 waagent[2578]: 2026-01-21T00:58:14.094378Z INFO Daemon Daemon Configure sudoer Jan 21 00:58:14.097828 waagent[2578]: 2026-01-21T00:58:14.097753Z INFO Daemon Daemon Configure sshd Jan 21 00:58:14.100814 systemd-networkd[2103]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 21 00:58:14.101623 waagent[2578]: 2026-01-21T00:58:14.101581Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 21 00:58:14.106134 waagent[2578]: 2026-01-21T00:58:14.102027Z INFO Daemon Daemon Deploy ssh public key. Jan 21 00:58:15.204096 waagent[2578]: 2026-01-21T00:58:15.204046Z INFO Daemon Daemon Provisioning complete Jan 21 00:58:15.214734 waagent[2578]: 2026-01-21T00:58:15.214705Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 21 00:58:15.215830 waagent[2578]: 2026-01-21T00:58:15.215100Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
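The provisioning steps above examine /proc/net/route to pick the primary interface, and the agent later dumps the same table in raw form, where Destination, Gateway and Mask are little-endian hex IPv4 values. A small sketch of that decoding, assuming the standard /proc/net/route column layout:

    import socket, struct

    def hex_to_ip(h: str) -> str:
        # /proc/net/route stores IPv4 fields as little-endian hex, e.g. 0108C80A -> 10.200.8.1
        return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

    def primary_interface():
        with open("/proc/net/route") as f:
            next(f)  # skip header: Iface Destination Gateway Flags ...
            for line in f:
                fields = line.split()
                iface, dest, gateway = fields[0], fields[1], fields[2]
                if dest == "00000000":          # the default route marks the primary interface
                    return iface, hex_to_ip(gateway)
        return None

    if __name__ == "__main__":
        print(primary_interface())  # on the VM in this log: ('eth0', '10.200.8.1')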
Jan 21 00:58:15.215830 waagent[2578]: 2026-01-21T00:58:15.215288Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 21 00:58:15.315808 waagent[2671]: 2026-01-21T00:58:15.315727Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 21 00:58:15.316061 waagent[2671]: 2026-01-21T00:58:15.315837Z INFO ExtHandler ExtHandler OS: flatcar 4547.0.0 Jan 21 00:58:15.316061 waagent[2671]: 2026-01-21T00:58:15.315878Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 21 00:58:15.316061 waagent[2671]: 2026-01-21T00:58:15.315917Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 21 00:58:15.346569 waagent[2671]: 2026-01-21T00:58:15.346520Z INFO ExtHandler ExtHandler Distro: flatcar-4547.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 21 00:58:15.346706 waagent[2671]: 2026-01-21T00:58:15.346680Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 21 00:58:15.346781 waagent[2671]: 2026-01-21T00:58:15.346733Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 21 00:58:15.351335 waagent[2671]: 2026-01-21T00:58:15.351285Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 21 00:58:15.358813 waagent[2671]: 2026-01-21T00:58:15.358778Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 21 00:58:15.359189 waagent[2671]: 2026-01-21T00:58:15.359159Z INFO ExtHandler Jan 21 00:58:15.359237 waagent[2671]: 2026-01-21T00:58:15.359215Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 673ad303-180d-4797-861a-c32fbeb06e8d eTag: 9779692161971892933 source: Fabric] Jan 21 00:58:15.359465 waagent[2671]: 2026-01-21T00:58:15.359441Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 21 00:58:15.359889 waagent[2671]: 2026-01-21T00:58:15.359858Z INFO ExtHandler Jan 21 00:58:15.359940 waagent[2671]: 2026-01-21T00:58:15.359903Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 21 00:58:15.363347 waagent[2671]: 2026-01-21T00:58:15.363317Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 21 00:58:15.461827 waagent[2671]: 2026-01-21T00:58:15.461717Z INFO ExtHandler Downloaded certificate {'thumbprint': '040A3E75D5A08F55F65CF5D2E9108496C69C3932', 'hasPrivateKey': True} Jan 21 00:58:15.462122 waagent[2671]: 2026-01-21T00:58:15.462092Z INFO ExtHandler Fetch goal state completed Jan 21 00:58:15.471994 waagent[2671]: 2026-01-21T00:58:15.471949Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 21 00:58:15.476175 waagent[2671]: 2026-01-21T00:58:15.476128Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2671 Jan 21 00:58:15.476277 waagent[2671]: 2026-01-21T00:58:15.476252Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 21 00:58:15.476516 waagent[2671]: 2026-01-21T00:58:15.476491Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 21 00:58:15.477581 waagent[2671]: 2026-01-21T00:58:15.477542Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 21 00:58:15.477928 waagent[2671]: 2026-01-21T00:58:15.477903Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 21 00:58:15.478025 waagent[2671]: 2026-01-21T00:58:15.478004Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 21 00:58:15.478389 waagent[2671]: 2026-01-21T00:58:15.478366Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 21 00:58:15.539565 waagent[2671]: 2026-01-21T00:58:15.539537Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 21 00:58:15.539702 waagent[2671]: 2026-01-21T00:58:15.539680Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 21 00:58:15.544791 waagent[2671]: 2026-01-21T00:58:15.544551Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 21 00:58:15.549746 systemd[1]: Reload requested from client PID 2686 ('systemctl') (unit waagent.service)... Jan 21 00:58:15.549759 systemd[1]: Reloading... Jan 21 00:58:15.620804 zram_generator::config[2726]: No configuration found. Jan 21 00:58:15.807054 systemd[1]: Reloading finished in 257 ms. Jan 21 00:58:15.820381 waagent[2671]: 2026-01-21T00:58:15.819320Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 21 00:58:15.820381 waagent[2671]: 2026-01-21T00:58:15.819448Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 21 00:58:15.837324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#225 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jan 21 00:58:16.093048 waagent[2671]: 2026-01-21T00:58:16.092955Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
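The EnvHandler above notes that the Azure-fabric DROP rule is not present yet and that its environment thread will set it up; the rules it later lists (ACCEPT DNS to 168.63.129.16, ACCEPT root-owned traffic, DROP new/invalid connections) can be approximated by hand as below. This is a sketch only, assuming iptables-nft 1.8.x and the `security` table the agent reports on, not the agent's own code path:

    import subprocess

    WIRESERVER = "168.63.129.16"

    # Roughly the three rules shown in the "Created firewall rules" listing further down this log.
    RULES = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        # -w waits for the xtables lock; -t security matches the table used in the listing above.
        subprocess.run(["iptables", "-w", "-t", "security", *rule], check=True)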
Jan 21 00:58:16.093257 waagent[2671]: 2026-01-21T00:58:16.093230Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 21 00:58:16.093840 waagent[2671]: 2026-01-21T00:58:16.093808Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 21 00:58:16.094210 waagent[2671]: 2026-01-21T00:58:16.094184Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 21 00:58:16.094407 waagent[2671]: 2026-01-21T00:58:16.094366Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 21 00:58:16.094463 waagent[2671]: 2026-01-21T00:58:16.094411Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 21 00:58:16.094561 waagent[2671]: 2026-01-21T00:58:16.094465Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 21 00:58:16.094715 waagent[2671]: 2026-01-21T00:58:16.094694Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 21 00:58:16.094921 waagent[2671]: 2026-01-21T00:58:16.094898Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 21 00:58:16.095055 waagent[2671]: 2026-01-21T00:58:16.095029Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 21 00:58:16.095198 waagent[2671]: 2026-01-21T00:58:16.095173Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 21 00:58:16.095742 waagent[2671]: 2026-01-21T00:58:16.095715Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 21 00:58:16.095742 waagent[2671]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 21 00:58:16.095742 waagent[2671]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 21 00:58:16.095742 waagent[2671]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 21 00:58:16.095742 waagent[2671]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 21 00:58:16.095742 waagent[2671]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 21 00:58:16.095742 waagent[2671]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 21 00:58:16.096033 waagent[2671]: 2026-01-21T00:58:16.095988Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 21 00:58:16.096572 waagent[2671]: 2026-01-21T00:58:16.096463Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 21 00:58:16.096572 waagent[2671]: 2026-01-21T00:58:16.096535Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 21 00:58:16.096684 waagent[2671]: 2026-01-21T00:58:16.096646Z INFO EnvHandler ExtHandler Configure routes Jan 21 00:58:16.096723 waagent[2671]: 2026-01-21T00:58:16.096706Z INFO EnvHandler ExtHandler Gateway:None Jan 21 00:58:16.096753 waagent[2671]: 2026-01-21T00:58:16.096741Z INFO EnvHandler ExtHandler Routes:None Jan 21 00:58:16.104714 waagent[2671]: 2026-01-21T00:58:16.103690Z INFO ExtHandler ExtHandler Jan 21 00:58:16.104714 waagent[2671]: 2026-01-21T00:58:16.103735Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 0f553f75-e7af-45f1-ab9f-52f1930d95d2 correlation fdb5292d-f680-41b0-9fdf-9c324a4163ca created: 2026-01-21T00:57:21.186546Z] Jan 21 00:58:16.104714 
waagent[2671]: 2026-01-21T00:58:16.103974Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 21 00:58:16.104714 waagent[2671]: 2026-01-21T00:58:16.104305Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 21 00:58:16.151594 waagent[2671]: 2026-01-21T00:58:16.151552Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 21 00:58:16.151594 waagent[2671]: Try `iptables -h' or 'iptables --help' for more information.) Jan 21 00:58:16.151900 waagent[2671]: 2026-01-21T00:58:16.151874Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FBB8AD02-1BC2-41EE-98EB-2C34924803A1;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 21 00:58:16.163884 waagent[2671]: 2026-01-21T00:58:16.163838Z INFO MonitorHandler ExtHandler Network interfaces: Jan 21 00:58:16.163884 waagent[2671]: Executing ['ip', '-a', '-o', 'link']: Jan 21 00:58:16.163884 waagent[2671]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 21 00:58:16.163884 waagent[2671]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:64:57 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx6045bddd6457 Jan 21 00:58:16.163884 waagent[2671]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:64:57 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 21 00:58:16.163884 waagent[2671]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 21 00:58:16.163884 waagent[2671]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 21 00:58:16.163884 waagent[2671]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 21 00:58:16.163884 waagent[2671]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 21 00:58:16.163884 waagent[2671]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 21 00:58:16.163884 waagent[2671]: 2: eth0 inet6 fe80::6245:bdff:fedd:6457/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 21 00:58:16.191381 waagent[2671]: 2026-01-21T00:58:16.191335Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 21 00:58:16.191381 waagent[2671]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 21 00:58:16.191381 waagent[2671]: pkts bytes target prot opt in out source destination Jan 21 00:58:16.191381 waagent[2671]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 21 00:58:16.191381 waagent[2671]: pkts bytes target prot opt in out source destination Jan 21 00:58:16.191381 waagent[2671]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 21 00:58:16.191381 waagent[2671]: pkts bytes target prot opt in out source destination Jan 21 00:58:16.191381 waagent[2671]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 21 00:58:16.191381 waagent[2671]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 21 00:58:16.191381 waagent[2671]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 21 00:58:16.193847 waagent[2671]: 2026-01-21T00:58:16.193803Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 21 
00:58:16.193847 waagent[2671]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 21 00:58:16.193847 waagent[2671]: pkts bytes target prot opt in out source destination Jan 21 00:58:16.193847 waagent[2671]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 21 00:58:16.193847 waagent[2671]: pkts bytes target prot opt in out source destination Jan 21 00:58:16.193847 waagent[2671]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 21 00:58:16.193847 waagent[2671]: pkts bytes target prot opt in out source destination Jan 21 00:58:16.193847 waagent[2671]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 21 00:58:16.193847 waagent[2671]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 21 00:58:16.193847 waagent[2671]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 21 00:58:23.096551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 21 00:58:23.098047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:58:23.611711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:58:23.614790 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 00:58:23.653611 kubelet[2826]: E0121 00:58:23.653562 2826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 00:58:23.656632 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 00:58:23.656760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 00:58:23.657117 systemd[1]: kubelet.service: Consumed 137ms CPU time, 110.6M memory peak. Jan 21 00:58:33.846748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 21 00:58:33.848211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:58:34.287882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:58:34.293953 (kubelet)[2842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 00:58:34.329335 kubelet[2842]: E0121 00:58:34.329302 2842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 00:58:34.331037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 00:58:34.331207 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 00:58:34.331628 systemd[1]: kubelet.service: Consumed 126ms CPU time, 108.2M memory peak. Jan 21 00:58:35.044797 chronyd[2428]: Selected source PHC0 Jan 21 00:58:37.495260 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 21 00:58:37.496323 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:48048.service - OpenSSH per-connection server daemon (10.200.16.10:48048). 
Jan 21 00:58:38.239826 sshd[2850]: Accepted publickey for core from 10.200.16.10 port 48048 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 00:58:38.240925 sshd-session[2850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:38.244829 systemd-logind[2449]: New session 4 of user core. Jan 21 00:58:38.255941 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 21 00:58:38.690501 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:48054.service - OpenSSH per-connection server daemon (10.200.16.10:48054). Jan 21 00:58:39.278156 sshd[2857]: Accepted publickey for core from 10.200.16.10 port 48054 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 00:58:39.279345 sshd-session[2857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:39.283740 systemd-logind[2449]: New session 5 of user core. Jan 21 00:58:39.289917 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 21 00:58:39.612910 sshd[2861]: Connection closed by 10.200.16.10 port 48054 Jan 21 00:58:39.613928 sshd-session[2857]: pam_unix(sshd:session): session closed for user core Jan 21 00:58:39.616541 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:48054.service: Deactivated successfully. Jan 21 00:58:39.618183 systemd[1]: session-5.scope: Deactivated successfully. Jan 21 00:58:39.619821 systemd-logind[2449]: Session 5 logged out. Waiting for processes to exit. Jan 21 00:58:39.620427 systemd-logind[2449]: Removed session 5. Jan 21 00:58:39.744225 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:56678.service - OpenSSH per-connection server daemon (10.200.16.10:56678). Jan 21 00:58:40.335170 sshd[2867]: Accepted publickey for core from 10.200.16.10 port 56678 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 00:58:40.335928 sshd-session[2867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:40.340303 systemd-logind[2449]: New session 6 of user core. Jan 21 00:58:40.349950 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 21 00:58:40.668077 sshd[2871]: Connection closed by 10.200.16.10 port 56678 Jan 21 00:58:40.668945 sshd-session[2867]: pam_unix(sshd:session): session closed for user core Jan 21 00:58:40.672389 systemd-logind[2449]: Session 6 logged out. Waiting for processes to exit. Jan 21 00:58:40.672750 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:56678.service: Deactivated successfully. Jan 21 00:58:40.674272 systemd[1]: session-6.scope: Deactivated successfully. Jan 21 00:58:40.675726 systemd-logind[2449]: Removed session 6. Jan 21 00:58:40.796276 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:56684.service - OpenSSH per-connection server daemon (10.200.16.10:56684). Jan 21 00:58:41.391238 sshd[2877]: Accepted publickey for core from 10.200.16.10 port 56684 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 00:58:41.392452 sshd-session[2877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:41.396609 systemd-logind[2449]: New session 7 of user core. Jan 21 00:58:41.403926 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 21 00:58:41.726832 sshd[2881]: Connection closed by 10.200.16.10 port 56684 Jan 21 00:58:41.728493 sshd-session[2877]: pam_unix(sshd:session): session closed for user core Jan 21 00:58:41.731374 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:56684.service: Deactivated successfully. 
Jan 21 00:58:41.732878 systemd[1]: session-7.scope: Deactivated successfully. Jan 21 00:58:41.733914 systemd-logind[2449]: Session 7 logged out. Waiting for processes to exit. Jan 21 00:58:41.734592 systemd-logind[2449]: Removed session 7. Jan 21 00:58:41.853276 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:56690.service - OpenSSH per-connection server daemon (10.200.16.10:56690). Jan 21 00:58:42.443411 sshd[2887]: Accepted publickey for core from 10.200.16.10 port 56690 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 00:58:42.443921 sshd-session[2887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:42.448432 systemd-logind[2449]: New session 8 of user core. Jan 21 00:58:42.453942 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 21 00:58:42.824979 sudo[2892]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 21 00:58:42.825215 sudo[2892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 00:58:42.831574 sudo[2892]: pam_unix(sudo:session): session closed for user root Jan 21 00:58:42.942345 sshd[2891]: Connection closed by 10.200.16.10 port 56690 Jan 21 00:58:42.942993 sshd-session[2887]: pam_unix(sshd:session): session closed for user core Jan 21 00:58:42.946693 systemd-logind[2449]: Session 8 logged out. Waiting for processes to exit. Jan 21 00:58:42.947103 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:56690.service: Deactivated successfully. Jan 21 00:58:42.948821 systemd[1]: session-8.scope: Deactivated successfully. Jan 21 00:58:42.950359 systemd-logind[2449]: Removed session 8. Jan 21 00:58:43.066324 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:56696.service - OpenSSH per-connection server daemon (10.200.16.10:56696). Jan 21 00:58:43.663109 sshd[2899]: Accepted publickey for core from 10.200.16.10 port 56696 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 00:58:43.664300 sshd-session[2899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:43.668595 systemd-logind[2449]: New session 9 of user core. Jan 21 00:58:43.675945 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 21 00:58:43.894231 sudo[2905]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 21 00:58:43.894484 sudo[2905]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 00:58:43.898552 sudo[2905]: pam_unix(sudo:session): session closed for user root Jan 21 00:58:43.903293 sudo[2904]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 21 00:58:43.903524 sudo[2904]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 00:58:43.909632 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 21 00:58:43.939000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 00:58:43.940913 augenrules[2929]: No rules Jan 21 00:58:43.941548 kernel: kauditd_printk_skb: 160 callbacks suppressed Jan 21 00:58:43.941588 kernel: audit: type=1305 audit(1768957123.939:256): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 00:58:43.939000 audit[2929]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd86ec760 a2=420 a3=0 items=0 ppid=2910 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.944645 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 00:58:43.945436 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 00:58:43.947717 sudo[2904]: pam_unix(sudo:session): session closed for user root Jan 21 00:58:43.939000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 00:58:43.951225 kernel: audit: type=1300 audit(1768957123.939:256): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd86ec760 a2=420 a3=0 items=0 ppid=2910 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.951267 kernel: audit: type=1327 audit(1768957123.939:256): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 00:58:43.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:43.954485 kernel: audit: type=1130 audit(1768957123.943:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:43.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:43.957608 kernel: audit: type=1131 audit(1768957123.943:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:43.943000 audit[2904]: USER_END pid=2904 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:58:43.960826 kernel: audit: type=1106 audit(1768957123.943:259): pid=2904 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:58:43.943000 audit[2904]: CRED_DISP pid=2904 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 21 00:58:43.963976 kernel: audit: type=1104 audit(1768957123.943:260): pid=2904 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:58:44.060783 sshd[2903]: Connection closed by 10.200.16.10 port 56696 Jan 21 00:58:44.061171 sshd-session[2899]: pam_unix(sshd:session): session closed for user core Jan 21 00:58:44.061000 audit[2899]: USER_END pid=2899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:58:44.065009 systemd-logind[2449]: Session 9 logged out. Waiting for processes to exit. Jan 21 00:58:44.065699 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:56696.service: Deactivated successfully. Jan 21 00:58:44.067261 systemd[1]: session-9.scope: Deactivated successfully. Jan 21 00:58:44.069992 systemd-logind[2449]: Removed session 9. Jan 21 00:58:44.061000 audit[2899]: CRED_DISP pid=2899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:58:44.074216 kernel: audit: type=1106 audit(1768957124.061:261): pid=2899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:58:44.074254 kernel: audit: type=1104 audit(1768957124.061:262): pid=2899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:58:44.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.39:22-10.200.16.10:56696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:44.077230 kernel: audit: type=1131 audit(1768957124.065:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.39:22-10.200.16.10:56696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:44.184343 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:56712.service - OpenSSH per-connection server daemon (10.200.16.10:56712). Jan 21 00:58:44.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.39:22-10.200.16.10:56712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:44.346418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 21 00:58:44.347908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
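The audit records in this stretch of the log carry PROCTITLE fields as hex with NUL-separated arguments; the auditctl record a few entries above decodes to /sbin/auditctl -R /etc/audit/audit.rules. A small sketch for decoding such fields:

    def decode_proctitle(hex_blob: str) -> list:
        # PROCTITLE is the raw /proc/<pid>/cmdline, hex-encoded, with NUL between argv entries.
        return [part.decode() for part in bytes.fromhex(hex_blob).split(b"\x00") if part]

    # Value from the auditctl record earlier in this log:
    blob = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(decode_proctitle(blob))  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']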
Jan 21 00:58:44.768000 audit[2938]: USER_ACCT pid=2938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:58:44.769904 sshd[2938]: Accepted publickey for core from 10.200.16.10 port 56712 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 00:58:44.769000 audit[2938]: CRED_ACQ pid=2938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:58:44.769000 audit[2938]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4f549640 a2=3 a3=0 items=0 ppid=1 pid=2938 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.769000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:58:44.771134 sshd-session[2938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:58:44.775821 systemd-logind[2449]: New session 10 of user core. Jan 21 00:58:44.781947 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 21 00:58:44.783000 audit[2938]: USER_START pid=2938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:58:44.784000 audit[2945]: CRED_ACQ pid=2945 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:58:44.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:44.818682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:58:44.821726 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 00:58:44.851128 kubelet[2951]: E0121 00:58:44.851092 2951 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 00:58:44.852598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 00:58:44.852709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 00:58:44.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 00:58:44.853105 systemd[1]: kubelet.service: Consumed 120ms CPU time, 110.2M memory peak. 
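The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is only written by kubeadm init or kubeadm join, so the scheduled restarts and status=1/FAILURE above are expected until bootstrapping completes. A minimal check, assuming kubeadm-style provisioning:

    # Until kubeadm writes the config file, the unit simply cycles its restart counter.
    ls -l /var/lib/kubelet/config.yaml 2>/dev/null || echo "kubelet not bootstrapped yet"
    systemctl status kubelet --no-pager     # shows the restart counter seen in the log
    journalctl -u kubelet -n 20 --no-pager  # the config.yaml error repeats here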
Jan 21 00:58:44.994000 audit[2959]: USER_ACCT pid=2959 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:58:44.995290 sudo[2959]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 21 00:58:44.994000 audit[2959]: CRED_REFR pid=2959 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:58:44.995546 sudo[2959]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 00:58:44.994000 audit[2959]: USER_START pid=2959 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:58:47.123323 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 21 00:58:47.136040 (dockerd)[2977]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 21 00:58:48.767298 dockerd[2977]: time="2026-01-21T00:58:48.767067481Z" level=info msg="Starting up" Jan 21 00:58:48.768818 dockerd[2977]: time="2026-01-21T00:58:48.768784402Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 21 00:58:48.778375 dockerd[2977]: time="2026-01-21T00:58:48.778325766Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 21 00:58:48.808875 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1426650258-merged.mount: Deactivated successfully. Jan 21 00:58:48.879459 dockerd[2977]: time="2026-01-21T00:58:48.879315212Z" level=info msg="Loading containers: start." 
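The docker.service notice about DOCKER_CGROUPS, DOCKER_OPTS and the other variables is informational: the unit references them, but nothing on this image sets them, so they expand to empty strings. A read-only sketch for confirming where they are referenced (standard systemctl commands and property names):

    # Show the unit text and any environment sources behind the "unset variable" notice.
    systemctl cat docker.service | grep -n 'DOCKER_'
    systemctl show docker.service -p Environment -p EnvironmentFiles --no-pager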
Jan 21 00:58:48.892802 kernel: Initializing XFRM netlink socket Jan 21 00:58:48.970805 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 21 00:58:48.970879 kernel: audit: type=1325 audit(1768957128.969:275): table=nat:5 family=2 entries=2 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.969000 audit[3023]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.982585 kernel: audit: type=1300 audit(1768957128.969:275): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffe7bcc990 a2=0 a3=0 items=0 ppid=2977 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.969000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffe7bcc990 a2=0 a3=0 items=0 ppid=2977 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 00:58:48.985801 kernel: audit: type=1327 audit(1768957128.969:275): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 00:58:48.985858 kernel: audit: type=1325 audit(1768957128.973:276): table=filter:6 family=2 entries=2 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.973000 audit[3025]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.973000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffe62759f0 a2=0 a3=0 items=0 ppid=2977 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.994859 kernel: audit: type=1300 audit(1768957128.973:276): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffe62759f0 a2=0 a3=0 items=0 ppid=2977 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 00:58:48.997743 kernel: audit: type=1327 audit(1768957128.973:276): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 00:58:48.973000 audit[3027]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.000928 kernel: audit: type=1325 audit(1768957128.973:277): table=filter:7 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.973000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc33d560b0 a2=0 a3=0 items=0 ppid=2977 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.005629 kernel: audit: type=1300 audit(1768957128.973:277): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc33d560b0 a2=0 a3=0 items=0 ppid=2977 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 00:58:49.009027 kernel: audit: type=1327 audit(1768957128.973:277): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 00:58:48.973000 audit[3029]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.011612 kernel: audit: type=1325 audit(1768957128.973:278): table=filter:8 family=2 entries=1 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.973000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfaf93b50 a2=0 a3=0 items=0 ppid=2977 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 00:58:48.980000 audit[3031]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.980000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe90eca560 a2=0 a3=0 items=0 ppid=2977 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.980000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 00:58:48.982000 audit[3033]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.982000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffea20e9f90 a2=0 a3=0 items=0 ppid=2977 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.982000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 00:58:48.985000 audit[3035]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.985000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd995efdb0 a2=0 a3=0 items=0 ppid=2977 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.985000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 00:58:48.988000 audit[3037]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:48.988000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe85d95830 a2=0 a3=0 items=0 ppid=2977 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:48.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 00:58:49.036000 audit[3040]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.036000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdabe6a720 a2=0 a3=0 items=0 ppid=2977 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 21 00:58:49.039000 audit[3042]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.039000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcf36375f0 a2=0 a3=0 items=0 ppid=2977 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 00:58:49.041000 audit[3044]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.041000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe5128bcd0 a2=0 a3=0 items=0 ppid=2977 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 00:58:49.043000 audit[3046]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.043000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff13efcc60 a2=0 a3=0 items=0 ppid=2977 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.043000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 00:58:49.044000 audit[3048]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.044000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcab177eb0 a2=0 a3=0 items=0 ppid=2977 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.044000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 00:58:49.144000 audit[3078]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.144000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd06ff41f0 a2=0 a3=0 items=0 ppid=2977 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 00:58:49.145000 audit[3080]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.145000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe18162030 a2=0 a3=0 items=0 ppid=2977 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.145000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 00:58:49.147000 audit[3082]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.147000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd7f737c0 a2=0 a3=0 items=0 ppid=2977 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 00:58:49.148000 audit[3084]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.148000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff86c40b20 a2=0 a3=0 items=0 ppid=2977 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 00:58:49.150000 audit[3086]: NETFILTER_CFG table=filter:22 family=10 entries=1 
op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.150000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffecfd89620 a2=0 a3=0 items=0 ppid=2977 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 00:58:49.152000 audit[3088]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.152000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdcff7b070 a2=0 a3=0 items=0 ppid=2977 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 00:58:49.153000 audit[3090]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.153000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd2e13e660 a2=0 a3=0 items=0 ppid=2977 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 00:58:49.155000 audit[3092]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.155000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffff8af8d00 a2=0 a3=0 items=0 ppid=2977 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 00:58:49.157000 audit[3094]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.157000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd74c8b4f0 a2=0 a3=0 items=0 ppid=2977 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 21 00:58:49.159000 audit[3096]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3096 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.159000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc77370ea0 a2=0 a3=0 items=0 ppid=2977 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 00:58:49.160000 audit[3098]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.160000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe70b206b0 a2=0 a3=0 items=0 ppid=2977 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 00:58:49.162000 audit[3100]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.162000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd77428010 a2=0 a3=0 items=0 ppid=2977 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 00:58:49.164000 audit[3102]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.164000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd45fc6190 a2=0 a3=0 items=0 ppid=2977 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 00:58:49.168000 audit[3107]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.168000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdbae551b0 a2=0 a3=0 items=0 ppid=2977 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.168000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 00:58:49.170000 audit[3109]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.170000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcd9e7edd0 a2=0 a3=0 
items=0 ppid=2977 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 00:58:49.171000 audit[3111]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.171000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffca21c3870 a2=0 a3=0 items=0 ppid=2977 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 00:58:49.173000 audit[3113]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.173000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc2584060 a2=0 a3=0 items=0 ppid=2977 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 00:58:49.175000 audit[3115]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.175000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd6c75d4c0 a2=0 a3=0 items=0 ppid=2977 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 00:58:49.176000 audit[3117]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:49.176000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd302d4020 a2=0 a3=0 items=0 ppid=2977 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 00:58:49.230000 audit[3122]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.230000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffcfcdf6a40 a2=0 a3=0 items=0 ppid=2977 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.230000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 21 00:58:49.232000 audit[3124]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.232000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd6deb5480 a2=0 a3=0 items=0 ppid=2977 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 21 00:58:49.239000 audit[3132]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.239000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc03e2d950 a2=0 a3=0 items=0 ppid=2977 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.239000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 21 00:58:49.243000 audit[3137]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.243000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff3fcbd5f0 a2=0 a3=0 items=0 ppid=2977 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 21 00:58:49.245000 audit[3139]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.245000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff117a0db0 a2=0 a3=0 items=0 ppid=2977 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 21 00:58:49.247000 audit[3141]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.247000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd478413d0 a2=0 a3=0 items=0 ppid=2977 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.247000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 21 00:58:49.249000 audit[3143]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.249000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd84142360 a2=0 a3=0 items=0 ppid=2977 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.249000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 00:58:49.251000 audit[3145]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:49.251000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb09c7da0 a2=0 a3=0 items=0 ppid=2977 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:49.251000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 21 00:58:49.252242 systemd-networkd[2103]: docker0: Link UP Jan 21 00:58:49.263330 dockerd[2977]: time="2026-01-21T00:58:49.263299320Z" level=info msg="Loading containers: done." Jan 21 00:58:49.330151 dockerd[2977]: time="2026-01-21T00:58:49.329928985Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 21 00:58:49.330151 dockerd[2977]: time="2026-01-21T00:58:49.330005603Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 21 00:58:49.330151 dockerd[2977]: time="2026-01-21T00:58:49.330084589Z" level=info msg="Initializing buildkit" Jan 21 00:58:49.365083 dockerd[2977]: time="2026-01-21T00:58:49.364901757Z" level=info msg="Completed buildkit initialization" Jan 21 00:58:49.371096 dockerd[2977]: time="2026-01-21T00:58:49.371057272Z" level=info msg="Daemon has completed initialization" Jan 21 00:58:49.371842 dockerd[2977]: time="2026-01-21T00:58:49.371207288Z" level=info msg="API listen on /run/docker.sock" Jan 21 00:58:49.371367 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 21 00:58:49.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:50.450878 containerd[2473]: time="2026-01-21T00:58:50.450833715Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 21 00:58:51.147101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3943064883.mount: Deactivated successfully. 
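The long run of NETFILTER_CFG and PROCTITLE records above is dockerd laying down its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER) in both iptables and ip6tables, wiring them into FORWARD, and adding the MASQUERADE rule for 172.17.0.0/16 out of docker0 before systemd-networkd reports the bridge up. Decoded, the PROCTITLE fields are commands of the form shown below; the inspection commands are read-only and assume docker.service is still running:

    # Examples decoded from the PROCTITLE records above (argv was NUL-separated):
    #   /usr/bin/iptables --wait -t nat -N DOCKER
    #   /usr/bin/iptables --wait -t nat -I POSTROUTING -s 172.17.0.0/16 -o docker0 -j MASQUERADE
    # Inspect the resulting rule set:
    sudo iptables  -t nat    -S DOCKER
    sudo iptables  -t filter -S DOCKER-USER
    sudo ip6tables -t filter -S DOCKER-FORWARD
    ip link show docker0    # the "docker0: Link UP" event from systemd-networkd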
Jan 21 00:58:51.655332 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 21 00:58:52.115072 containerd[2473]: time="2026-01-21T00:58:52.115031270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:52.116835 containerd[2473]: time="2026-01-21T00:58:52.116809697Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 21 00:58:52.119323 containerd[2473]: time="2026-01-21T00:58:52.119150832Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:52.122073 containerd[2473]: time="2026-01-21T00:58:52.122047603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:52.122740 containerd[2473]: time="2026-01-21T00:58:52.122718256Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.671847986s" Jan 21 00:58:52.122847 containerd[2473]: time="2026-01-21T00:58:52.122833412Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 21 00:58:52.123483 containerd[2473]: time="2026-01-21T00:58:52.123462735Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 21 00:58:53.810657 containerd[2473]: time="2026-01-21T00:58:53.810607828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:53.812798 containerd[2473]: time="2026-01-21T00:58:53.812555019Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 21 00:58:53.814840 containerd[2473]: time="2026-01-21T00:58:53.814780301Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:53.818193 containerd[2473]: time="2026-01-21T00:58:53.818149294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:53.819137 containerd[2473]: time="2026-01-21T00:58:53.818981593Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.695492683s" Jan 21 00:58:53.819137 containerd[2473]: time="2026-01-21T00:58:53.819024956Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference 
\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 21 00:58:53.819689 containerd[2473]: time="2026-01-21T00:58:53.819665521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 21 00:58:55.096499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 21 00:58:55.099140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:58:55.625748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:58:55.630894 kernel: kauditd_printk_skb: 111 callbacks suppressed Jan 21 00:58:55.630965 kernel: audit: type=1130 audit(1768957135.625:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:55.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:55.634033 (kubelet)[3259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 00:58:55.648797 containerd[2473]: time="2026-01-21T00:58:55.648229144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:55.651740 containerd[2473]: time="2026-01-21T00:58:55.651715755Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 21 00:58:55.654153 containerd[2473]: time="2026-01-21T00:58:55.654129196Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:55.658343 containerd[2473]: time="2026-01-21T00:58:55.658310936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:55.659422 containerd[2473]: time="2026-01-21T00:58:55.658908896Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.839148748s" Jan 21 00:58:55.660013 containerd[2473]: time="2026-01-21T00:58:55.659985294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 21 00:58:55.662148 containerd[2473]: time="2026-01-21T00:58:55.661137108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 21 00:58:55.666859 kubelet[3259]: E0121 00:58:55.666832 3259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 00:58:55.668511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 
00:58:55.668632 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 00:58:55.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 00:58:55.669878 systemd[1]: kubelet.service: Consumed 129ms CPU time, 110.3M memory peak. Jan 21 00:58:55.672812 kernel: audit: type=1131 audit(1768957135.668:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 00:58:56.661609 update_engine[2453]: I20260121 00:58:56.661106 2453 update_attempter.cc:509] Updating boot flags... Jan 21 00:58:56.992536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043861731.mount: Deactivated successfully. Jan 21 00:58:57.388474 containerd[2473]: time="2026-01-21T00:58:57.388429593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:57.390638 containerd[2473]: time="2026-01-21T00:58:57.390546842Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 21 00:58:57.392466 containerd[2473]: time="2026-01-21T00:58:57.392441176Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:57.395508 containerd[2473]: time="2026-01-21T00:58:57.395466538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:57.395788 containerd[2473]: time="2026-01-21T00:58:57.395751003Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.734585728s" Jan 21 00:58:57.395822 containerd[2473]: time="2026-01-21T00:58:57.395795199Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 21 00:58:57.396368 containerd[2473]: time="2026-01-21T00:58:57.396239280Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 21 00:58:57.893521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950014907.mount: Deactivated successfully. Jan 21 00:58:58.311565 waagent[2671]: 2026-01-21T00:58:58.311454Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 21 00:58:58.330627 waagent[2671]: 2026-01-21T00:58:58.330587Z INFO ExtHandler Jan 21 00:58:58.330723 waagent[2671]: 2026-01-21T00:58:58.330672Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: be82af21-ad3c-4135-8757-11651a2229c1 eTag: 17684658825366295833 source: Fabric] Jan 21 00:58:58.330993 waagent[2671]: 2026-01-21T00:58:58.330965Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
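The PullImage and Pulled lines come from containerd's CRI plugin (containerd[2473]), so the control-plane images land in containerd's k8s.io namespace rather than in dockerd's image store. A sketch for verifying what has been pulled; the socket path is the usual containerd default and may differ on this image:

    # Images pulled through CRI live in containerd's k8s.io namespace, not "docker images".
    sudo ctr -n k8s.io images ls | grep registry.k8s.io
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images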
Jan 21 00:58:58.331582 waagent[2671]: 2026-01-21T00:58:58.331317Z INFO ExtHandler Jan 21 00:58:58.331582 waagent[2671]: 2026-01-21T00:58:58.331362Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 21 00:58:58.376430 waagent[2671]: 2026-01-21T00:58:58.376398Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 21 00:58:58.453015 waagent[2671]: 2026-01-21T00:58:58.452969Z INFO ExtHandler Downloaded certificate {'thumbprint': '040A3E75D5A08F55F65CF5D2E9108496C69C3932', 'hasPrivateKey': True} Jan 21 00:58:58.453496 waagent[2671]: 2026-01-21T00:58:58.453417Z INFO ExtHandler Fetch goal state completed Jan 21 00:58:58.453838 waagent[2671]: 2026-01-21T00:58:58.453801Z INFO ExtHandler ExtHandler Jan 21 00:58:58.453889 waagent[2671]: 2026-01-21T00:58:58.453858Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 9df7e2da-8b3d-473f-ae97-14932d116ca0 correlation fdb5292d-f680-41b0-9fdf-9c324a4163ca created: 2026-01-21T00:58:51.905304Z] Jan 21 00:58:58.454212 waagent[2671]: 2026-01-21T00:58:58.454150Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 21 00:58:58.454759 waagent[2671]: 2026-01-21T00:58:58.454591Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 21 00:58:58.715090 containerd[2473]: time="2026-01-21T00:58:58.715048698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:58.717185 containerd[2473]: time="2026-01-21T00:58:58.716993767Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20251479" Jan 21 00:58:58.719438 containerd[2473]: time="2026-01-21T00:58:58.719413494Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:58.723009 containerd[2473]: time="2026-01-21T00:58:58.722982320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:58.723718 containerd[2473]: time="2026-01-21T00:58:58.723696380Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.327431791s" Jan 21 00:58:58.723817 containerd[2473]: time="2026-01-21T00:58:58.723803619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 21 00:58:58.724394 containerd[2473]: time="2026-01-21T00:58:58.724368908Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 21 00:58:59.260445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804562444.mount: Deactivated successfully. 
Jan 21 00:58:59.274962 containerd[2473]: time="2026-01-21T00:58:59.274922526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 00:58:59.276973 containerd[2473]: time="2026-01-21T00:58:59.276803450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 21 00:58:59.279160 containerd[2473]: time="2026-01-21T00:58:59.279138095Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 00:58:59.282207 containerd[2473]: time="2026-01-21T00:58:59.282176007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 00:58:59.282975 containerd[2473]: time="2026-01-21T00:58:59.282622092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 558.216376ms" Jan 21 00:58:59.282975 containerd[2473]: time="2026-01-21T00:58:59.282648478Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 21 00:58:59.283208 containerd[2473]: time="2026-01-21T00:58:59.283191414Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 21 00:58:59.839275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619226425.mount: Deactivated successfully. 
Jan 21 00:59:01.759781 containerd[2473]: time="2026-01-21T00:59:01.759719467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:01.765856 containerd[2473]: time="2026-01-21T00:59:01.764919074Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Jan 21 00:59:01.767558 containerd[2473]: time="2026-01-21T00:59:01.767429874Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:01.774502 containerd[2473]: time="2026-01-21T00:59:01.774391049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:01.774921 containerd[2473]: time="2026-01-21T00:59:01.774898860Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.491684298s" Jan 21 00:59:01.774968 containerd[2473]: time="2026-01-21T00:59:01.774928203Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 21 00:59:04.004134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:59:04.004306 systemd[1]: kubelet.service: Consumed 129ms CPU time, 110.3M memory peak. Jan 21 00:59:04.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:59:04.008784 kernel: audit: type=1130 audit(1768957144.003:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:59:04.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:59:04.011424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:59:04.012793 kernel: audit: type=1131 audit(1768957144.003:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:59:04.035897 systemd[1]: Reload requested from client PID 3463 ('systemctl') (unit session-10.scope)... Jan 21 00:59:04.035908 systemd[1]: Reloading... Jan 21 00:59:04.131798 zram_generator::config[3513]: No configuration found. Jan 21 00:59:04.334719 systemd[1]: Reloading finished in 298 ms. Jan 21 00:59:04.365142 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 21 00:59:04.365210 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 21 00:59:04.365480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:59:04.365543 systemd[1]: kubelet.service: Consumed 80ms CPU time, 81.3M memory peak. 
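The "Reload requested from client PID 3463 ('systemctl')" line is a systemctl daemon-reload issued from session 10, and the burst of audit BPF prog-id LOAD/UNLOAD records that follows is consistent with systemd detaching and re-attaching its per-unit cgroup BPF programs during that reload; zram_generator simply reports that no zram configuration exists. A read-only way to look at the programs involved, assuming bpftool is available on the image:

    # List currently loaded BPF programs; their IDs share the same namespace as the
    # prog-id values in the audit BPF LOAD/UNLOAD records emitted during the reload.
    sudo bpftool prog list | head -n 20
    # The reload itself was the usual: systemctl daemon-reload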
Jan 21 00:59:04.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 00:59:04.373016 kernel: audit: type=1130 audit(1768957144.364:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 00:59:04.373081 kernel: audit: type=1334 audit(1768957144.370:321): prog-id=87 op=LOAD Jan 21 00:59:04.370000 audit: BPF prog-id=87 op=LOAD Jan 21 00:59:04.370764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:59:04.370000 audit: BPF prog-id=77 op=UNLOAD Jan 21 00:59:04.370000 audit: BPF prog-id=88 op=LOAD Jan 21 00:59:04.378485 kernel: audit: type=1334 audit(1768957144.370:322): prog-id=77 op=UNLOAD Jan 21 00:59:04.378528 kernel: audit: type=1334 audit(1768957144.370:323): prog-id=88 op=LOAD Jan 21 00:59:04.370000 audit: BPF prog-id=89 op=LOAD Jan 21 00:59:04.380241 kernel: audit: type=1334 audit(1768957144.370:324): prog-id=89 op=LOAD Jan 21 00:59:04.380286 kernel: audit: type=1334 audit(1768957144.370:325): prog-id=78 op=UNLOAD Jan 21 00:59:04.370000 audit: BPF prog-id=78 op=UNLOAD Jan 21 00:59:04.370000 audit: BPF prog-id=79 op=UNLOAD Jan 21 00:59:04.384114 kernel: audit: type=1334 audit(1768957144.370:326): prog-id=79 op=UNLOAD Jan 21 00:59:04.371000 audit: BPF prog-id=90 op=LOAD Jan 21 00:59:04.385319 kernel: audit: type=1334 audit(1768957144.371:327): prog-id=90 op=LOAD Jan 21 00:59:04.371000 audit: BPF prog-id=71 op=UNLOAD Jan 21 00:59:04.371000 audit: BPF prog-id=91 op=LOAD Jan 21 00:59:04.371000 audit: BPF prog-id=92 op=LOAD Jan 21 00:59:04.371000 audit: BPF prog-id=72 op=UNLOAD Jan 21 00:59:04.371000 audit: BPF prog-id=73 op=UNLOAD Jan 21 00:59:04.372000 audit: BPF prog-id=93 op=LOAD Jan 21 00:59:04.373000 audit: BPF prog-id=70 op=UNLOAD Jan 21 00:59:04.375000 audit: BPF prog-id=94 op=LOAD Jan 21 00:59:04.375000 audit: BPF prog-id=83 op=UNLOAD Jan 21 00:59:04.375000 audit: BPF prog-id=95 op=LOAD Jan 21 00:59:04.375000 audit: BPF prog-id=96 op=LOAD Jan 21 00:59:04.375000 audit: BPF prog-id=84 op=UNLOAD Jan 21 00:59:04.375000 audit: BPF prog-id=85 op=UNLOAD Jan 21 00:59:04.376000 audit: BPF prog-id=97 op=LOAD Jan 21 00:59:04.376000 audit: BPF prog-id=69 op=UNLOAD Jan 21 00:59:04.387000 audit: BPF prog-id=98 op=LOAD Jan 21 00:59:04.387000 audit: BPF prog-id=99 op=LOAD Jan 21 00:59:04.387000 audit: BPF prog-id=67 op=UNLOAD Jan 21 00:59:04.387000 audit: BPF prog-id=68 op=UNLOAD Jan 21 00:59:04.388000 audit: BPF prog-id=100 op=LOAD Jan 21 00:59:04.388000 audit: BPF prog-id=74 op=UNLOAD Jan 21 00:59:04.388000 audit: BPF prog-id=101 op=LOAD Jan 21 00:59:04.388000 audit: BPF prog-id=102 op=LOAD Jan 21 00:59:04.388000 audit: BPF prog-id=75 op=UNLOAD Jan 21 00:59:04.388000 audit: BPF prog-id=76 op=UNLOAD Jan 21 00:59:04.389000 audit: BPF prog-id=103 op=LOAD Jan 21 00:59:04.389000 audit: BPF prog-id=80 op=UNLOAD Jan 21 00:59:04.389000 audit: BPF prog-id=104 op=LOAD Jan 21 00:59:04.389000 audit: BPF prog-id=105 op=LOAD Jan 21 00:59:04.389000 audit: BPF prog-id=81 op=UNLOAD Jan 21 00:59:04.389000 audit: BPF prog-id=82 op=UNLOAD Jan 21 00:59:04.390000 audit: BPF prog-id=106 op=LOAD Jan 21 00:59:04.390000 audit: BPF prog-id=86 op=UNLOAD Jan 21 00:59:04.964564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 21 00:59:04.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:59:04.977006 (kubelet)[3580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 00:59:05.012518 kubelet[3580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:59:05.012518 kubelet[3580]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 00:59:05.012518 kubelet[3580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:59:05.012518 kubelet[3580]: I0121 00:59:05.012079 3580 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 00:59:05.274600 kubelet[3580]: I0121 00:59:05.274515 3580 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 21 00:59:05.274600 kubelet[3580]: I0121 00:59:05.274539 3580 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 00:59:05.274933 kubelet[3580]: I0121 00:59:05.274750 3580 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 00:59:05.306350 kubelet[3580]: E0121 00:59:05.306303 3580 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 00:59:05.307867 kubelet[3580]: I0121 00:59:05.307248 3580 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 00:59:05.313965 kubelet[3580]: I0121 00:59:05.313949 3580 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 00:59:05.316871 kubelet[3580]: I0121 00:59:05.316851 3580 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 21 00:59:05.317041 kubelet[3580]: I0121 00:59:05.317020 3580 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 00:59:05.317169 kubelet[3580]: I0121 00:59:05.317041 3580 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-n-ed178c4493","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 00:59:05.317283 kubelet[3580]: I0121 00:59:05.317172 3580 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 00:59:05.317283 kubelet[3580]: I0121 00:59:05.317188 3580 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 00:59:05.318040 kubelet[3580]: I0121 00:59:05.318026 3580 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:59:05.321560 kubelet[3580]: I0121 00:59:05.321315 3580 kubelet.go:480] "Attempting to sync node with API server" Jan 21 00:59:05.321560 kubelet[3580]: I0121 00:59:05.321334 3580 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 00:59:05.321560 kubelet[3580]: I0121 00:59:05.321359 3580 kubelet.go:386] "Adding apiserver pod source" Jan 21 00:59:05.321560 kubelet[3580]: I0121 00:59:05.321369 3580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 00:59:05.334108 kubelet[3580]: I0121 00:59:05.334095 3580 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 00:59:05.334618 kubelet[3580]: I0121 00:59:05.334605 3580 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 00:59:05.335221 kubelet[3580]: W0121 00:59:05.335210 3580 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 21 00:59:05.336490 kubelet[3580]: E0121 00:59:05.336456 3580 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 00:59:05.336558 kubelet[3580]: E0121 00:59:05.336545 3580 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-n-ed178c4493&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 00:59:05.337271 kubelet[3580]: I0121 00:59:05.337255 3580 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 00:59:05.337324 kubelet[3580]: I0121 00:59:05.337294 3580 server.go:1289] "Started kubelet" Jan 21 00:59:05.338804 kubelet[3580]: I0121 00:59:05.337862 3580 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 00:59:05.339125 kubelet[3580]: I0121 00:59:05.339114 3580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 00:59:05.341685 kubelet[3580]: I0121 00:59:05.341634 3580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 00:59:05.341918 kubelet[3580]: I0121 00:59:05.341903 3580 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 00:59:05.344000 audit[3595]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:05.344000 audit[3595]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffefa178870 a2=0 a3=0 items=0 ppid=3580 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 00:59:05.345484 kubelet[3580]: E0121 00:59:05.344016 3580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-n-ed178c4493.188c9926beb789b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-n-ed178c4493,UID:ci-4547.0.0-n-ed178c4493,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-n-ed178c4493,},FirstTimestamp:2026-01-21 00:59:05.337268657 +0000 UTC m=+0.357398729,LastTimestamp:2026-01-21 00:59:05.337268657 +0000 UTC m=+0.357398729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-n-ed178c4493,}" Jan 21 00:59:05.346726 kubelet[3580]: I0121 00:59:05.346696 3580 server.go:317] "Adding debug handlers to kubelet server" Jan 21 00:59:05.346000 audit[3596]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 
21 00:59:05.346000 audit[3596]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb92969d0 a2=0 a3=0 items=0 ppid=3580 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 00:59:05.347635 kubelet[3580]: I0121 00:59:05.347613 3580 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 00:59:05.348498 kubelet[3580]: I0121 00:59:05.348485 3580 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 00:59:05.348785 kubelet[3580]: E0121 00:59:05.348760 3580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-n-ed178c4493\" not found" Jan 21 00:59:05.349857 kubelet[3580]: E0121 00:59:05.349429 3580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-ed178c4493?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Jan 21 00:59:05.349857 kubelet[3580]: I0121 00:59:05.349729 3580 reconciler.go:26] "Reconciler: start to sync state" Jan 21 00:59:05.349857 kubelet[3580]: I0121 00:59:05.349757 3580 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 00:59:05.350000 audit[3598]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:05.351361 kubelet[3580]: E0121 00:59:05.350136 3580 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 00:59:05.351361 kubelet[3580]: I0121 00:59:05.350408 3580 factory.go:223] Registration of the systemd container factory successfully Jan 21 00:59:05.351361 kubelet[3580]: I0121 00:59:05.350489 3580 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 00:59:05.351361 kubelet[3580]: E0121 00:59:05.351352 3580 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 00:59:05.350000 audit[3598]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd81e1d700 a2=0 a3=0 items=0 ppid=3580 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 00:59:05.351837 kubelet[3580]: I0121 00:59:05.351823 3580 factory.go:223] Registration of the containerd container factory successfully Jan 21 00:59:05.353000 audit[3600]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:05.353000 audit[3600]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc4b791a70 a2=0 a3=0 items=0 ppid=3580 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 00:59:05.366435 kubelet[3580]: I0121 00:59:05.366417 3580 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 00:59:05.366435 kubelet[3580]: I0121 00:59:05.366428 3580 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 00:59:05.366536 kubelet[3580]: I0121 00:59:05.366443 3580 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:59:05.370675 kubelet[3580]: I0121 00:59:05.370657 3580 policy_none.go:49] "None policy: Start" Jan 21 00:59:05.370675 kubelet[3580]: I0121 00:59:05.370675 3580 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 00:59:05.370751 kubelet[3580]: I0121 00:59:05.370685 3580 state_mem.go:35] "Initializing new in-memory state store" Jan 21 00:59:05.377586 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 21 00:59:05.382000 audit[3606]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:05.382000 audit[3606]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe7b5a8d20 a2=0 a3=0 items=0 ppid=3580 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 21 00:59:05.384872 kubelet[3580]: I0121 00:59:05.384379 3580 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 21 00:59:05.385000 audit[3609]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:05.385000 audit[3609]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe787672d0 a2=0 a3=0 items=0 ppid=3580 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.385000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 00:59:05.386829 kubelet[3580]: I0121 00:59:05.386816 3580 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 21 00:59:05.386895 kubelet[3580]: I0121 00:59:05.386890 3580 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 21 00:59:05.386946 kubelet[3580]: I0121 00:59:05.386941 3580 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 21 00:59:05.386984 kubelet[3580]: I0121 00:59:05.386979 3580 kubelet.go:2436] "Starting kubelet main sync loop" Jan 21 00:59:05.387075 kubelet[3580]: E0121 00:59:05.387062 3580 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 00:59:05.388050 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 21 00:59:05.387000 audit[3610]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:05.387000 audit[3610]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc95fcb160 a2=0 a3=0 items=0 ppid=3580 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.387000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 00:59:05.388000 audit[3611]: NETFILTER_CFG table=mangle:52 family=10 entries=1 op=nft_register_chain pid=3611 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:05.388000 audit[3611]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffede398d90 a2=0 a3=0 items=0 ppid=3580 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 00:59:05.390000 kubelet[3580]: E0121 00:59:05.389984 3580 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 00:59:05.390000 audit[3613]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_chain pid=3613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:05.390000 
audit[3613]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc508e2c20 a2=0 a3=0 items=0 ppid=3580 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.390000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 00:59:05.391000 audit[3612]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:05.391000 audit[3612]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe82ada7a0 a2=0 a3=0 items=0 ppid=3580 pid=3612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 00:59:05.393255 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 21 00:59:05.394000 audit[3614]: NETFILTER_CFG table=filter:55 family=10 entries=1 op=nft_register_chain pid=3614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:05.394000 audit[3614]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc32182280 a2=0 a3=0 items=0 ppid=3580 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 00:59:05.394000 audit[3615]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:05.394000 audit[3615]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff713b9950 a2=0 a3=0 items=0 ppid=3580 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 00:59:05.399315 kubelet[3580]: E0121 00:59:05.399291 3580 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 00:59:05.399433 kubelet[3580]: I0121 00:59:05.399421 3580 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 00:59:05.399463 kubelet[3580]: I0121 00:59:05.399436 3580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 00:59:05.400003 kubelet[3580]: I0121 00:59:05.399972 3580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 00:59:05.400529 kubelet[3580]: E0121 00:59:05.400511 3580 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 21 00:59:05.400599 kubelet[3580]: E0121 00:59:05.400544 3580 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-n-ed178c4493\" not found" Jan 21 00:59:05.500711 kubelet[3580]: I0121 00:59:05.500688 3580 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.500964 kubelet[3580]: E0121 00:59:05.500944 3580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.509595 systemd[1]: Created slice kubepods-burstable-pode63b4b5c0216a331412ca30561ee7786.slice - libcontainer container kubepods-burstable-pode63b4b5c0216a331412ca30561ee7786.slice. Jan 21 00:59:05.519287 kubelet[3580]: E0121 00:59:05.519264 3580 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.522555 systemd[1]: Created slice kubepods-burstable-podd087032073810708ca3d972380c61d05.slice - libcontainer container kubepods-burstable-podd087032073810708ca3d972380c61d05.slice. Jan 21 00:59:05.524549 kubelet[3580]: E0121 00:59:05.524519 3580 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.526181 systemd[1]: Created slice kubepods-burstable-podaef6e0249e6394425e7e57a41002743d.slice - libcontainer container kubepods-burstable-podaef6e0249e6394425e7e57a41002743d.slice. Jan 21 00:59:05.528866 kubelet[3580]: E0121 00:59:05.528849 3580 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.550166 kubelet[3580]: E0121 00:59:05.550134 3580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-ed178c4493?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Jan 21 00:59:05.551269 kubelet[3580]: I0121 00:59:05.551251 3580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.551374 kubelet[3580]: I0121 00:59:05.551278 3580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aef6e0249e6394425e7e57a41002743d-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-n-ed178c4493\" (UID: \"aef6e0249e6394425e7e57a41002743d\") " pod="kube-system/kube-scheduler-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.551374 kubelet[3580]: I0121 00:59:05.551296 3580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e63b4b5c0216a331412ca30561ee7786-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-n-ed178c4493\" (UID: 
\"e63b4b5c0216a331412ca30561ee7786\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.551374 kubelet[3580]: I0121 00:59:05.551314 3580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.551374 kubelet[3580]: I0121 00:59:05.551329 3580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.551374 kubelet[3580]: I0121 00:59:05.551354 3580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.551501 kubelet[3580]: I0121 00:59:05.551375 3580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.551501 kubelet[3580]: I0121 00:59:05.551393 3580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e63b4b5c0216a331412ca30561ee7786-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-n-ed178c4493\" (UID: \"e63b4b5c0216a331412ca30561ee7786\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.551501 kubelet[3580]: I0121 00:59:05.551407 3580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e63b4b5c0216a331412ca30561ee7786-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-n-ed178c4493\" (UID: \"e63b4b5c0216a331412ca30561ee7786\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.702910 kubelet[3580]: I0121 00:59:05.702879 3580 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.703162 kubelet[3580]: E0121 00:59:05.703130 3580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:05.820793 containerd[2473]: time="2026-01-21T00:59:05.820677879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-n-ed178c4493,Uid:e63b4b5c0216a331412ca30561ee7786,Namespace:kube-system,Attempt:0,}" Jan 21 00:59:05.826231 containerd[2473]: time="2026-01-21T00:59:05.826107156Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-n-ed178c4493,Uid:d087032073810708ca3d972380c61d05,Namespace:kube-system,Attempt:0,}" Jan 21 00:59:05.830394 containerd[2473]: time="2026-01-21T00:59:05.830361516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-n-ed178c4493,Uid:aef6e0249e6394425e7e57a41002743d,Namespace:kube-system,Attempt:0,}" Jan 21 00:59:05.934525 containerd[2473]: time="2026-01-21T00:59:05.934479462Z" level=info msg="connecting to shim 13c30f2dac70d63a434a6b557f78cbe581d165014910c8a7aadb8b0a872e2f70" address="unix:///run/containerd/s/9d2ea88e69827439f00f365c2ba84214852682e6ea6c73928b1388b420fa3394" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:05.936425 containerd[2473]: time="2026-01-21T00:59:05.936044398Z" level=info msg="connecting to shim 26f07708e80b4a32c23ff715c0c89a24308a1067a07eae8d53266ee6274eb75e" address="unix:///run/containerd/s/6912d936f4ca6cd6e9e738d54f9a2bd55bed3d1f96747d80c6d98582f61f58b9" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:05.951364 kubelet[3580]: E0121 00:59:05.951307 3580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-n-ed178c4493?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Jan 21 00:59:05.965271 containerd[2473]: time="2026-01-21T00:59:05.965207107Z" level=info msg="connecting to shim a2272638e42b8231f5dba13b392c14a1e5ef04f1d97d91052c7ee4342a6bbc40" address="unix:///run/containerd/s/10734e8a0873458119a4f89d6e3ec714f18337b77c6ae50bf843b8c40a6d7b72" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:05.966977 systemd[1]: Started cri-containerd-26f07708e80b4a32c23ff715c0c89a24308a1067a07eae8d53266ee6274eb75e.scope - libcontainer container 26f07708e80b4a32c23ff715c0c89a24308a1067a07eae8d53266ee6274eb75e. Jan 21 00:59:05.971736 systemd[1]: Started cri-containerd-13c30f2dac70d63a434a6b557f78cbe581d165014910c8a7aadb8b0a872e2f70.scope - libcontainer container 13c30f2dac70d63a434a6b557f78cbe581d165014910c8a7aadb8b0a872e2f70. 
Jan 21 00:59:05.989000 audit: BPF prog-id=107 op=LOAD Jan 21 00:59:05.989000 audit: BPF prog-id=108 op=LOAD Jan 21 00:59:05.989000 audit[3654]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3642 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663037373038653830623461333263323366663731356330633839 Jan 21 00:59:05.989000 audit: BPF prog-id=108 op=UNLOAD Jan 21 00:59:05.989000 audit[3654]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663037373038653830623461333263323366663731356330633839 Jan 21 00:59:05.989000 audit: BPF prog-id=109 op=LOAD Jan 21 00:59:05.989000 audit[3654]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3642 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663037373038653830623461333263323366663731356330633839 Jan 21 00:59:05.989000 audit: BPF prog-id=110 op=LOAD Jan 21 00:59:05.989000 audit[3654]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3642 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663037373038653830623461333263323366663731356330633839 Jan 21 00:59:05.989000 audit: BPF prog-id=110 op=UNLOAD Jan 21 00:59:05.989000 audit[3654]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663037373038653830623461333263323366663731356330633839 Jan 21 00:59:05.989000 audit: BPF prog-id=109 op=UNLOAD Jan 21 00:59:05.989000 audit[3654]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663037373038653830623461333263323366663731356330633839 Jan 21 00:59:05.990000 audit: BPF prog-id=111 op=LOAD Jan 21 00:59:05.990000 audit[3654]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3642 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236663037373038653830623461333263323366663731356330633839 Jan 21 00:59:05.994000 audit: BPF prog-id=112 op=LOAD Jan 21 00:59:05.994000 audit: BPF prog-id=113 op=LOAD Jan 21 00:59:05.994000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3625 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633330663264616337306436336134333461366235353766373863 Jan 21 00:59:05.995000 audit: BPF prog-id=113 op=UNLOAD Jan 21 00:59:05.995000 audit[3662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633330663264616337306436336134333461366235353766373863 Jan 21 00:59:05.995000 audit: BPF prog-id=114 op=LOAD Jan 21 00:59:05.995000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3625 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633330663264616337306436336134333461366235353766373863 Jan 21 00:59:05.995000 audit: BPF prog-id=115 op=LOAD Jan 21 00:59:05.995000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3625 pid=3662 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633330663264616337306436336134333461366235353766373863 Jan 21 00:59:05.995000 audit: BPF prog-id=115 op=UNLOAD Jan 21 00:59:05.995000 audit[3662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633330663264616337306436336134333461366235353766373863 Jan 21 00:59:05.995000 audit: BPF prog-id=114 op=UNLOAD Jan 21 00:59:05.995000 audit[3662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633330663264616337306436336134333461366235353766373863 Jan 21 00:59:05.995000 audit: BPF prog-id=116 op=LOAD Jan 21 00:59:05.995000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3625 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633330663264616337306436336134333461366235353766373863 Jan 21 00:59:06.006508 systemd[1]: Started cri-containerd-a2272638e42b8231f5dba13b392c14a1e5ef04f1d97d91052c7ee4342a6bbc40.scope - libcontainer container a2272638e42b8231f5dba13b392c14a1e5ef04f1d97d91052c7ee4342a6bbc40. 
Jan 21 00:59:06.023000 audit: BPF prog-id=117 op=LOAD Jan 21 00:59:06.024000 audit: BPF prog-id=118 op=LOAD Jan 21 00:59:06.024000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3681 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323732363338653432623832333166356462613133623339326331 Jan 21 00:59:06.024000 audit: BPF prog-id=118 op=UNLOAD Jan 21 00:59:06.024000 audit[3712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323732363338653432623832333166356462613133623339326331 Jan 21 00:59:06.024000 audit: BPF prog-id=119 op=LOAD Jan 21 00:59:06.024000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3681 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323732363338653432623832333166356462613133623339326331 Jan 21 00:59:06.024000 audit: BPF prog-id=120 op=LOAD Jan 21 00:59:06.024000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3681 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323732363338653432623832333166356462613133623339326331 Jan 21 00:59:06.024000 audit: BPF prog-id=120 op=UNLOAD Jan 21 00:59:06.024000 audit[3712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323732363338653432623832333166356462613133623339326331 Jan 21 00:59:06.025000 audit: BPF prog-id=119 op=UNLOAD Jan 21 00:59:06.025000 audit[3712]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323732363338653432623832333166356462613133623339326331 Jan 21 00:59:06.025000 audit: BPF prog-id=121 op=LOAD Jan 21 00:59:06.025000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3681 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132323732363338653432623832333166356462613133623339326331 Jan 21 00:59:06.058135 containerd[2473]: time="2026-01-21T00:59:06.058108560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-n-ed178c4493,Uid:d087032073810708ca3d972380c61d05,Namespace:kube-system,Attempt:0,} returns sandbox id \"26f07708e80b4a32c23ff715c0c89a24308a1067a07eae8d53266ee6274eb75e\"" Jan 21 00:59:06.061468 containerd[2473]: time="2026-01-21T00:59:06.061444309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-n-ed178c4493,Uid:e63b4b5c0216a331412ca30561ee7786,Namespace:kube-system,Attempt:0,} returns sandbox id \"13c30f2dac70d63a434a6b557f78cbe581d165014910c8a7aadb8b0a872e2f70\"" Jan 21 00:59:06.066333 containerd[2473]: time="2026-01-21T00:59:06.066300958Z" level=info msg="CreateContainer within sandbox \"26f07708e80b4a32c23ff715c0c89a24308a1067a07eae8d53266ee6274eb75e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 21 00:59:06.071047 containerd[2473]: time="2026-01-21T00:59:06.070996224Z" level=info msg="CreateContainer within sandbox \"13c30f2dac70d63a434a6b557f78cbe581d165014910c8a7aadb8b0a872e2f70\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 21 00:59:06.080688 containerd[2473]: time="2026-01-21T00:59:06.080666034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-n-ed178c4493,Uid:aef6e0249e6394425e7e57a41002743d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2272638e42b8231f5dba13b392c14a1e5ef04f1d97d91052c7ee4342a6bbc40\"" Jan 21 00:59:06.086110 containerd[2473]: time="2026-01-21T00:59:06.086079839Z" level=info msg="CreateContainer within sandbox \"a2272638e42b8231f5dba13b392c14a1e5ef04f1d97d91052c7ee4342a6bbc40\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 21 00:59:06.102838 containerd[2473]: time="2026-01-21T00:59:06.102813322Z" level=info msg="Container 65a414e7a302035449037027c9d0b592506af797bba9a9ed36073ca85f76ba1a: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:06.104604 kubelet[3580]: I0121 00:59:06.104582 3580 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:06.104943 kubelet[3580]: E0121 00:59:06.104871 3580 kubelet_node_status.go:107] "Unable to register 
node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:06.123520 containerd[2473]: time="2026-01-21T00:59:06.123496245Z" level=info msg="Container 2ed2c1451ff463ba1f5189518e42590a763130d75fa3cb953849122218189750: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:06.144658 containerd[2473]: time="2026-01-21T00:59:06.144631424Z" level=info msg="CreateContainer within sandbox \"26f07708e80b4a32c23ff715c0c89a24308a1067a07eae8d53266ee6274eb75e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65a414e7a302035449037027c9d0b592506af797bba9a9ed36073ca85f76ba1a\"" Jan 21 00:59:06.145087 containerd[2473]: time="2026-01-21T00:59:06.145064995Z" level=info msg="StartContainer for \"65a414e7a302035449037027c9d0b592506af797bba9a9ed36073ca85f76ba1a\"" Jan 21 00:59:06.145748 containerd[2473]: time="2026-01-21T00:59:06.145723975Z" level=info msg="connecting to shim 65a414e7a302035449037027c9d0b592506af797bba9a9ed36073ca85f76ba1a" address="unix:///run/containerd/s/6912d936f4ca6cd6e9e738d54f9a2bd55bed3d1f96747d80c6d98582f61f58b9" protocol=ttrpc version=3 Jan 21 00:59:06.148799 containerd[2473]: time="2026-01-21T00:59:06.148339664Z" level=info msg="Container eae21c2c67e32e2824b88f6a1558297e30f1e3b601f1dffe7a7178fb608b1d17: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:06.155570 containerd[2473]: time="2026-01-21T00:59:06.155537972Z" level=info msg="CreateContainer within sandbox \"13c30f2dac70d63a434a6b557f78cbe581d165014910c8a7aadb8b0a872e2f70\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ed2c1451ff463ba1f5189518e42590a763130d75fa3cb953849122218189750\"" Jan 21 00:59:06.156092 containerd[2473]: time="2026-01-21T00:59:06.156066625Z" level=info msg="StartContainer for \"2ed2c1451ff463ba1f5189518e42590a763130d75fa3cb953849122218189750\"" Jan 21 00:59:06.156901 containerd[2473]: time="2026-01-21T00:59:06.156877636Z" level=info msg="connecting to shim 2ed2c1451ff463ba1f5189518e42590a763130d75fa3cb953849122218189750" address="unix:///run/containerd/s/9d2ea88e69827439f00f365c2ba84214852682e6ea6c73928b1388b420fa3394" protocol=ttrpc version=3 Jan 21 00:59:06.162061 systemd[1]: Started cri-containerd-65a414e7a302035449037027c9d0b592506af797bba9a9ed36073ca85f76ba1a.scope - libcontainer container 65a414e7a302035449037027c9d0b592506af797bba9a9ed36073ca85f76ba1a. Jan 21 00:59:06.163830 containerd[2473]: time="2026-01-21T00:59:06.163806676Z" level=info msg="CreateContainer within sandbox \"a2272638e42b8231f5dba13b392c14a1e5ef04f1d97d91052c7ee4342a6bbc40\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eae21c2c67e32e2824b88f6a1558297e30f1e3b601f1dffe7a7178fb608b1d17\"" Jan 21 00:59:06.164397 containerd[2473]: time="2026-01-21T00:59:06.164366512Z" level=info msg="StartContainer for \"eae21c2c67e32e2824b88f6a1558297e30f1e3b601f1dffe7a7178fb608b1d17\"" Jan 21 00:59:06.165586 containerd[2473]: time="2026-01-21T00:59:06.165554522Z" level=info msg="connecting to shim eae21c2c67e32e2824b88f6a1558297e30f1e3b601f1dffe7a7178fb608b1d17" address="unix:///run/containerd/s/10734e8a0873458119a4f89d6e3ec714f18337b77c6ae50bf843b8c40a6d7b72" protocol=ttrpc version=3 Jan 21 00:59:06.177944 systemd[1]: Started cri-containerd-2ed2c1451ff463ba1f5189518e42590a763130d75fa3cb953849122218189750.scope - libcontainer container 2ed2c1451ff463ba1f5189518e42590a763130d75fa3cb953849122218189750. 
Jan 21 00:59:06.181000 audit: BPF prog-id=122 op=LOAD Jan 21 00:59:06.182000 audit: BPF prog-id=123 op=LOAD Jan 21 00:59:06.182000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3642 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613431346537613330323033353434393033373032376339643062 Jan 21 00:59:06.183000 audit: BPF prog-id=123 op=UNLOAD Jan 21 00:59:06.183000 audit[3756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613431346537613330323033353434393033373032376339643062 Jan 21 00:59:06.183000 audit: BPF prog-id=124 op=LOAD Jan 21 00:59:06.183000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3642 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613431346537613330323033353434393033373032376339643062 Jan 21 00:59:06.183000 audit: BPF prog-id=125 op=LOAD Jan 21 00:59:06.183000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3642 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613431346537613330323033353434393033373032376339643062 Jan 21 00:59:06.183000 audit: BPF prog-id=125 op=UNLOAD Jan 21 00:59:06.183000 audit[3756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613431346537613330323033353434393033373032376339643062 Jan 21 00:59:06.183000 audit: BPF prog-id=124 op=UNLOAD Jan 21 00:59:06.183000 audit[3756]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3642 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613431346537613330323033353434393033373032376339643062 Jan 21 00:59:06.184000 audit: BPF prog-id=126 op=LOAD Jan 21 00:59:06.184000 audit[3756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3642 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613431346537613330323033353434393033373032376339643062 Jan 21 00:59:06.190886 systemd[1]: Started cri-containerd-eae21c2c67e32e2824b88f6a1558297e30f1e3b601f1dffe7a7178fb608b1d17.scope - libcontainer container eae21c2c67e32e2824b88f6a1558297e30f1e3b601f1dffe7a7178fb608b1d17. Jan 21 00:59:06.196000 audit: BPF prog-id=127 op=LOAD Jan 21 00:59:06.197000 audit: BPF prog-id=128 op=LOAD Jan 21 00:59:06.197000 audit[3768]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3625 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643263313435316666343633626131663531383935313865343235 Jan 21 00:59:06.197000 audit: BPF prog-id=128 op=UNLOAD Jan 21 00:59:06.197000 audit[3768]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643263313435316666343633626131663531383935313865343235 Jan 21 00:59:06.198000 audit: BPF prog-id=129 op=LOAD Jan 21 00:59:06.198000 audit[3768]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3625 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.198000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643263313435316666343633626131663531383935313865343235 Jan 21 00:59:06.198000 audit: BPF prog-id=130 op=LOAD Jan 21 00:59:06.198000 audit[3768]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3625 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643263313435316666343633626131663531383935313865343235 Jan 21 00:59:06.198000 audit: BPF prog-id=130 op=UNLOAD Jan 21 00:59:06.198000 audit[3768]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643263313435316666343633626131663531383935313865343235 Jan 21 00:59:06.198000 audit: BPF prog-id=129 op=UNLOAD Jan 21 00:59:06.198000 audit[3768]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3625 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643263313435316666343633626131663531383935313865343235 Jan 21 00:59:06.198000 audit: BPF prog-id=131 op=LOAD Jan 21 00:59:06.198000 audit[3768]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3625 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643263313435316666343633626131663531383935313865343235 Jan 21 00:59:06.218000 audit: BPF prog-id=132 op=LOAD Jan 21 00:59:06.221000 audit: BPF prog-id=133 op=LOAD Jan 21 00:59:06.221000 audit[3780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3681 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.221000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653231633263363765333265323832346238386636613135353832 Jan 21 00:59:06.221000 audit: BPF prog-id=133 op=UNLOAD Jan 21 00:59:06.221000 audit[3780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653231633263363765333265323832346238386636613135353832 Jan 21 00:59:06.221000 audit: BPF prog-id=134 op=LOAD Jan 21 00:59:06.221000 audit[3780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3681 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653231633263363765333265323832346238386636613135353832 Jan 21 00:59:06.222000 audit: BPF prog-id=135 op=LOAD Jan 21 00:59:06.222000 audit[3780]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3681 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653231633263363765333265323832346238386636613135353832 Jan 21 00:59:06.222000 audit: BPF prog-id=135 op=UNLOAD Jan 21 00:59:06.222000 audit[3780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653231633263363765333265323832346238386636613135353832 Jan 21 00:59:06.222000 audit: BPF prog-id=134 op=UNLOAD Jan 21 00:59:06.222000 audit[3780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3681 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.222000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653231633263363765333265323832346238386636613135353832 Jan 21 00:59:06.222000 audit: BPF prog-id=136 op=LOAD Jan 21 00:59:06.222000 audit[3780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3681 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:06.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653231633263363765333265323832346238386636613135353832 Jan 21 00:59:06.253844 containerd[2473]: time="2026-01-21T00:59:06.253297567Z" level=info msg="StartContainer for \"2ed2c1451ff463ba1f5189518e42590a763130d75fa3cb953849122218189750\" returns successfully" Jan 21 00:59:06.255615 containerd[2473]: time="2026-01-21T00:59:06.255595278Z" level=info msg="StartContainer for \"65a414e7a302035449037027c9d0b592506af797bba9a9ed36073ca85f76ba1a\" returns successfully" Jan 21 00:59:06.282841 containerd[2473]: time="2026-01-21T00:59:06.282815324Z" level=info msg="StartContainer for \"eae21c2c67e32e2824b88f6a1558297e30f1e3b601f1dffe7a7178fb608b1d17\" returns successfully" Jan 21 00:59:06.396587 kubelet[3580]: E0121 00:59:06.396419 3580 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:06.403795 kubelet[3580]: E0121 00:59:06.403446 3580 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:06.414194 kubelet[3580]: E0121 00:59:06.414079 3580 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:06.907796 kubelet[3580]: I0121 00:59:06.906855 3580 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:07.411576 kubelet[3580]: E0121 00:59:07.411181 3580 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:07.411576 kubelet[3580]: E0121 00:59:07.411477 3580 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:08.415208 kubelet[3580]: E0121 00:59:08.415008 3580 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:08.747738 kubelet[3580]: E0121 00:59:08.747480 3580 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-n-ed178c4493\" not found" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:08.849660 kubelet[3580]: I0121 00:59:08.849437 3580 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:08.849660 kubelet[3580]: E0121 00:59:08.849476 3580 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547.0.0-n-ed178c4493\": node \"ci-4547.0.0-n-ed178c4493\" not found" Jan 21 00:59:08.949611 kubelet[3580]: I0121 00:59:08.949585 3580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:08.956786 kubelet[3580]: E0121 00:59:08.956754 3580 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-n-ed178c4493\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:08.956861 kubelet[3580]: I0121 00:59:08.956809 3580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:08.958106 kubelet[3580]: E0121 00:59:08.958076 3580 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:08.958192 kubelet[3580]: I0121 00:59:08.958112 3580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:08.959369 kubelet[3580]: E0121 00:59:08.959310 3580 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-n-ed178c4493\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:09.336556 kubelet[3580]: I0121 00:59:09.336523 3580 apiserver.go:52] "Watching apiserver" Jan 21 00:59:09.350686 kubelet[3580]: I0121 00:59:09.350653 3580 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 00:59:10.927714 systemd[1]: Reload requested from client PID 3858 ('systemctl') (unit session-10.scope)... Jan 21 00:59:10.927728 systemd[1]: Reloading... Jan 21 00:59:10.992895 zram_generator::config[3904]: No configuration found. Jan 21 00:59:11.205624 systemd[1]: Reloading finished in 277 ms. Jan 21 00:59:11.234701 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:59:11.251584 systemd[1]: kubelet.service: Deactivated successfully. Jan 21 00:59:11.251877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:59:11.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:59:11.252035 systemd[1]: kubelet.service: Consumed 654ms CPU time, 130.7M memory peak. Jan 21 00:59:11.255798 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 21 00:59:11.255872 kernel: audit: type=1131 audit(1768957151.251:422): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:59:11.256910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 21 00:59:11.256000 audit: BPF prog-id=137 op=LOAD Jan 21 00:59:11.260785 kernel: audit: type=1334 audit(1768957151.256:423): prog-id=137 op=LOAD Jan 21 00:59:11.256000 audit: BPF prog-id=87 op=UNLOAD Jan 21 00:59:11.256000 audit: BPF prog-id=138 op=LOAD Jan 21 00:59:11.263571 kernel: audit: type=1334 audit(1768957151.256:424): prog-id=87 op=UNLOAD Jan 21 00:59:11.263603 kernel: audit: type=1334 audit(1768957151.256:425): prog-id=138 op=LOAD Jan 21 00:59:11.256000 audit: BPF prog-id=139 op=LOAD Jan 21 00:59:11.264898 kernel: audit: type=1334 audit(1768957151.256:426): prog-id=139 op=LOAD Jan 21 00:59:11.256000 audit: BPF prog-id=88 op=UNLOAD Jan 21 00:59:11.266290 kernel: audit: type=1334 audit(1768957151.256:427): prog-id=88 op=UNLOAD Jan 21 00:59:11.256000 audit: BPF prog-id=89 op=UNLOAD Jan 21 00:59:11.271677 kernel: audit: type=1334 audit(1768957151.256:428): prog-id=89 op=UNLOAD Jan 21 00:59:11.271732 kernel: audit: type=1334 audit(1768957151.257:429): prog-id=140 op=LOAD Jan 21 00:59:11.257000 audit: BPF prog-id=140 op=LOAD Jan 21 00:59:11.257000 audit: BPF prog-id=97 op=UNLOAD Jan 21 00:59:11.273811 kernel: audit: type=1334 audit(1768957151.257:430): prog-id=97 op=UNLOAD Jan 21 00:59:11.276146 kernel: audit: type=1334 audit(1768957151.266:431): prog-id=141 op=LOAD Jan 21 00:59:11.266000 audit: BPF prog-id=141 op=LOAD Jan 21 00:59:11.266000 audit: BPF prog-id=90 op=UNLOAD Jan 21 00:59:11.266000 audit: BPF prog-id=142 op=LOAD Jan 21 00:59:11.266000 audit: BPF prog-id=143 op=LOAD Jan 21 00:59:11.266000 audit: BPF prog-id=91 op=UNLOAD Jan 21 00:59:11.266000 audit: BPF prog-id=92 op=UNLOAD Jan 21 00:59:11.267000 audit: BPF prog-id=144 op=LOAD Jan 21 00:59:11.267000 audit: BPF prog-id=93 op=UNLOAD Jan 21 00:59:11.268000 audit: BPF prog-id=145 op=LOAD Jan 21 00:59:11.268000 audit: BPF prog-id=100 op=UNLOAD Jan 21 00:59:11.268000 audit: BPF prog-id=146 op=LOAD Jan 21 00:59:11.268000 audit: BPF prog-id=147 op=LOAD Jan 21 00:59:11.268000 audit: BPF prog-id=101 op=UNLOAD Jan 21 00:59:11.268000 audit: BPF prog-id=102 op=UNLOAD Jan 21 00:59:11.268000 audit: BPF prog-id=148 op=LOAD Jan 21 00:59:11.268000 audit: BPF prog-id=149 op=LOAD Jan 21 00:59:11.268000 audit: BPF prog-id=98 op=UNLOAD Jan 21 00:59:11.268000 audit: BPF prog-id=99 op=UNLOAD Jan 21 00:59:11.269000 audit: BPF prog-id=150 op=LOAD Jan 21 00:59:11.269000 audit: BPF prog-id=106 op=UNLOAD Jan 21 00:59:11.271000 audit: BPF prog-id=151 op=LOAD Jan 21 00:59:11.271000 audit: BPF prog-id=94 op=UNLOAD Jan 21 00:59:11.271000 audit: BPF prog-id=152 op=LOAD Jan 21 00:59:11.271000 audit: BPF prog-id=153 op=LOAD Jan 21 00:59:11.271000 audit: BPF prog-id=95 op=UNLOAD Jan 21 00:59:11.271000 audit: BPF prog-id=96 op=UNLOAD Jan 21 00:59:11.272000 audit: BPF prog-id=154 op=LOAD Jan 21 00:59:11.272000 audit: BPF prog-id=103 op=UNLOAD Jan 21 00:59:11.272000 audit: BPF prog-id=155 op=LOAD Jan 21 00:59:11.272000 audit: BPF prog-id=156 op=LOAD Jan 21 00:59:11.272000 audit: BPF prog-id=104 op=UNLOAD Jan 21 00:59:11.272000 audit: BPF prog-id=105 op=UNLOAD Jan 21 00:59:11.776887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:59:11.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:59:11.784029 (kubelet)[3975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 00:59:11.822926 kubelet[3975]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:59:11.822926 kubelet[3975]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 00:59:11.822926 kubelet[3975]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:59:11.822926 kubelet[3975]: I0121 00:59:11.822834 3975 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 00:59:11.832544 kubelet[3975]: I0121 00:59:11.832520 3975 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 21 00:59:11.832544 kubelet[3975]: I0121 00:59:11.832539 3975 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 00:59:11.832724 kubelet[3975]: I0121 00:59:11.832711 3975 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 00:59:11.834128 kubelet[3975]: I0121 00:59:11.833980 3975 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 21 00:59:11.836207 kubelet[3975]: I0121 00:59:11.836192 3975 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 00:59:11.840739 kubelet[3975]: I0121 00:59:11.840720 3975 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 00:59:11.844217 kubelet[3975]: I0121 00:59:11.843625 3975 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 21 00:59:11.844436 kubelet[3975]: I0121 00:59:11.844418 3975 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 00:59:11.844715 kubelet[3975]: I0121 00:59:11.844472 3975 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-n-ed178c4493","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 00:59:11.844861 kubelet[3975]: I0121 00:59:11.844847 3975 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 00:59:11.844904 kubelet[3975]: I0121 00:59:11.844900 3975 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 00:59:11.844973 kubelet[3975]: I0121 00:59:11.844969 3975 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:59:11.845123 kubelet[3975]: I0121 00:59:11.845117 3975 kubelet.go:480] "Attempting to sync node with API server" Jan 21 00:59:11.845164 kubelet[3975]: I0121 00:59:11.845159 3975 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 00:59:11.845211 kubelet[3975]: I0121 00:59:11.845207 3975 kubelet.go:386] "Adding apiserver pod source" Jan 21 00:59:11.845249 kubelet[3975]: I0121 00:59:11.845245 3975 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 00:59:11.849106 kubelet[3975]: I0121 00:59:11.849065 3975 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 00:59:11.849567 kubelet[3975]: I0121 00:59:11.849489 3975 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 00:59:11.854996 kubelet[3975]: I0121 00:59:11.854981 3975 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 00:59:11.855076 kubelet[3975]: I0121 00:59:11.855023 3975 server.go:1289] "Started kubelet" Jan 21 00:59:11.857944 kubelet[3975]: I0121 00:59:11.857926 3975 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 00:59:11.868518 kubelet[3975]: I0121 
00:59:11.867835 3975 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 00:59:11.868939 kubelet[3975]: I0121 00:59:11.868845 3975 server.go:317] "Adding debug handlers to kubelet server" Jan 21 00:59:11.872568 kubelet[3975]: I0121 00:59:11.871554 3975 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 00:59:11.872568 kubelet[3975]: I0121 00:59:11.871718 3975 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 00:59:11.872568 kubelet[3975]: I0121 00:59:11.871909 3975 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 00:59:11.874280 kubelet[3975]: I0121 00:59:11.874219 3975 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 00:59:11.875152 kubelet[3975]: I0121 00:59:11.875132 3975 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 00:59:11.876088 kubelet[3975]: I0121 00:59:11.875224 3975 reconciler.go:26] "Reconciler: start to sync state" Jan 21 00:59:11.876396 kubelet[3975]: I0121 00:59:11.876378 3975 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 21 00:59:11.878067 kubelet[3975]: I0121 00:59:11.878052 3975 factory.go:223] Registration of the systemd container factory successfully Jan 21 00:59:11.878147 kubelet[3975]: I0121 00:59:11.878132 3975 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 00:59:11.878488 kubelet[3975]: I0121 00:59:11.878475 3975 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 21 00:59:11.878553 kubelet[3975]: I0121 00:59:11.878548 3975 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 21 00:59:11.879215 kubelet[3975]: I0121 00:59:11.879115 3975 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 21 00:59:11.879215 kubelet[3975]: I0121 00:59:11.879128 3975 kubelet.go:2436] "Starting kubelet main sync loop" Jan 21 00:59:11.879215 kubelet[3975]: E0121 00:59:11.879170 3975 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 00:59:11.883011 kubelet[3975]: E0121 00:59:11.882827 3975 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 00:59:11.883698 kubelet[3975]: I0121 00:59:11.883681 3975 factory.go:223] Registration of the containerd container factory successfully Jan 21 00:59:11.917728 kubelet[3975]: I0121 00:59:11.917630 3975 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 00:59:11.917892 kubelet[3975]: I0121 00:59:11.917827 3975 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 00:59:11.917892 kubelet[3975]: I0121 00:59:11.917846 3975 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:59:11.918792 kubelet[3975]: I0121 00:59:11.917959 3975 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 21 00:59:11.918792 kubelet[3975]: I0121 00:59:11.917970 3975 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 21 00:59:11.918792 kubelet[3975]: I0121 00:59:11.917985 3975 policy_none.go:49] "None policy: Start" Jan 21 00:59:11.918792 kubelet[3975]: I0121 00:59:11.917995 3975 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 00:59:11.918792 kubelet[3975]: I0121 00:59:11.918003 3975 state_mem.go:35] "Initializing new in-memory state store" Jan 21 00:59:11.918792 kubelet[3975]: I0121 00:59:11.918084 3975 state_mem.go:75] "Updated machine memory state" Jan 21 00:59:11.923157 kubelet[3975]: E0121 00:59:11.922896 3975 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 00:59:11.923157 kubelet[3975]: I0121 00:59:11.923016 3975 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 00:59:11.923157 kubelet[3975]: I0121 00:59:11.923025 3975 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 00:59:11.927870 kubelet[3975]: I0121 00:59:11.926528 3975 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 00:59:11.930591 kubelet[3975]: E0121 00:59:11.930575 3975 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 21 00:59:11.980536 kubelet[3975]: I0121 00:59:11.980477 3975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:11.981303 kubelet[3975]: I0121 00:59:11.981059 3975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:11.981622 kubelet[3975]: I0121 00:59:11.981457 3975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:11.989253 kubelet[3975]: I0121 00:59:11.989211 3975 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 00:59:11.993600 kubelet[3975]: I0121 00:59:11.993584 3975 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 00:59:11.994311 kubelet[3975]: I0121 00:59:11.994223 3975 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 00:59:12.031180 kubelet[3975]: I0121 00:59:12.030267 3975 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.044292 kubelet[3975]: I0121 00:59:12.044043 3975 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.044292 kubelet[3975]: I0121 00:59:12.044107 3975 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.076974 kubelet[3975]: I0121 00:59:12.076953 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e63b4b5c0216a331412ca30561ee7786-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-n-ed178c4493\" (UID: \"e63b4b5c0216a331412ca30561ee7786\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.077092 kubelet[3975]: I0121 00:59:12.077051 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.077092 kubelet[3975]: I0121 00:59:12.077072 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.077092 kubelet[3975]: I0121 00:59:12.077088 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.077202 kubelet[3975]: I0121 
00:59:12.077117 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aef6e0249e6394425e7e57a41002743d-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-n-ed178c4493\" (UID: \"aef6e0249e6394425e7e57a41002743d\") " pod="kube-system/kube-scheduler-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.078120 kubelet[3975]: I0121 00:59:12.077497 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e63b4b5c0216a331412ca30561ee7786-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-n-ed178c4493\" (UID: \"e63b4b5c0216a331412ca30561ee7786\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.078120 kubelet[3975]: I0121 00:59:12.077526 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e63b4b5c0216a331412ca30561ee7786-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-n-ed178c4493\" (UID: \"e63b4b5c0216a331412ca30561ee7786\") " pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.078120 kubelet[3975]: I0121 00:59:12.077547 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.078120 kubelet[3975]: I0121 00:59:12.077577 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d087032073810708ca3d972380c61d05-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-n-ed178c4493\" (UID: \"d087032073810708ca3d972380c61d05\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.848126 kubelet[3975]: I0121 00:59:12.848095 3975 apiserver.go:52] "Watching apiserver" Jan 21 00:59:12.875264 kubelet[3975]: I0121 00:59:12.875237 3975 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 00:59:12.906288 kubelet[3975]: I0121 00:59:12.904936 3975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.910472 kubelet[3975]: I0121 00:59:12.910447 3975 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 21 00:59:12.910563 kubelet[3975]: E0121 00:59:12.910511 3975 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-n-ed178c4493\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-n-ed178c4493" Jan 21 00:59:12.927401 kubelet[3975]: I0121 00:59:12.926939 3975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-n-ed178c4493" podStartSLOduration=1.926927084 podStartE2EDuration="1.926927084s" podCreationTimestamp="2026-01-21 00:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:59:12.926888517 +0000 UTC m=+1.139617065" watchObservedRunningTime="2026-01-21 00:59:12.926927084 
+0000 UTC m=+1.139655623" Jan 21 00:59:12.927401 kubelet[3975]: I0121 00:59:12.927308 3975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-n-ed178c4493" podStartSLOduration=1.9272972 podStartE2EDuration="1.9272972s" podCreationTimestamp="2026-01-21 00:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:59:12.920136805 +0000 UTC m=+1.132865355" watchObservedRunningTime="2026-01-21 00:59:12.9272972 +0000 UTC m=+1.140025755" Jan 21 00:59:12.934662 kubelet[3975]: I0121 00:59:12.934621 3975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-n-ed178c4493" podStartSLOduration=1.934608962 podStartE2EDuration="1.934608962s" podCreationTimestamp="2026-01-21 00:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:59:12.934417814 +0000 UTC m=+1.147146387" watchObservedRunningTime="2026-01-21 00:59:12.934608962 +0000 UTC m=+1.147337511" Jan 21 00:59:16.141647 kubelet[3975]: I0121 00:59:16.141616 3975 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 21 00:59:16.142089 containerd[2473]: time="2026-01-21T00:59:16.142004344Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 21 00:59:16.142288 kubelet[3975]: I0121 00:59:16.142158 3975 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 21 00:59:16.958927 systemd[1]: Created slice kubepods-besteffort-podf5d1dfea_7a66_4e2e_9bc2_c8411e3b4e61.slice - libcontainer container kubepods-besteffort-podf5d1dfea_7a66_4e2e_9bc2_c8411e3b4e61.slice. 
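Editor's note: the "Created slice kubepods-besteffort-pod….slice" line just above and the volume reconciler records that follow refer to the same kube-proxy pod; the slice name embeds the pod's QoS class and its UID with dashes replaced by underscores, while the reconciler records use the dashed UID. A minimal sketch of that mapping, assuming the kubepods-<qos>-pod<uid>.slice pattern seen in this log:

```python
# Recover the pod UID from a kubepods slice name as logged by systemd above.
# Assumes the "kubepods-<qos>-pod<uid>.slice" naming seen in this log, where
# the UID's dashes are replaced by underscores.
def slice_to_pod_uid(slice_name: str) -> str:
    stem = slice_name.removesuffix(".slice")
    pod_part = stem.split("-pod", 1)[1]
    return pod_part.replace("_", "-")

print(slice_to_pod_uid("kubepods-besteffort-podf5d1dfea_7a66_4e2e_9bc2_c8411e3b4e61.slice"))
# f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61 -> the UID in the kube-proxy-5bnks volume records
```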
Jan 21 00:59:17.012608 kubelet[3975]: I0121 00:59:17.012582 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61-kube-proxy\") pod \"kube-proxy-5bnks\" (UID: \"f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61\") " pod="kube-system/kube-proxy-5bnks" Jan 21 00:59:17.012771 kubelet[3975]: I0121 00:59:17.012739 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61-xtables-lock\") pod \"kube-proxy-5bnks\" (UID: \"f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61\") " pod="kube-system/kube-proxy-5bnks" Jan 21 00:59:17.012820 kubelet[3975]: I0121 00:59:17.012789 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61-lib-modules\") pod \"kube-proxy-5bnks\" (UID: \"f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61\") " pod="kube-system/kube-proxy-5bnks" Jan 21 00:59:17.012820 kubelet[3975]: I0121 00:59:17.012809 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg2w4\" (UniqueName: \"kubernetes.io/projected/f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61-kube-api-access-pg2w4\") pod \"kube-proxy-5bnks\" (UID: \"f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61\") " pod="kube-system/kube-proxy-5bnks" Jan 21 00:59:17.116639 kubelet[3975]: E0121 00:59:17.116609 3975 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 21 00:59:17.116639 kubelet[3975]: E0121 00:59:17.116633 3975 projected.go:194] Error preparing data for projected volume kube-api-access-pg2w4 for pod kube-system/kube-proxy-5bnks: configmap "kube-root-ca.crt" not found Jan 21 00:59:17.116850 kubelet[3975]: E0121 00:59:17.116695 3975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61-kube-api-access-pg2w4 podName:f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61 nodeName:}" failed. No retries permitted until 2026-01-21 00:59:17.616675244 +0000 UTC m=+5.829403781 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pg2w4" (UniqueName: "kubernetes.io/projected/f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61-kube-api-access-pg2w4") pod "kube-proxy-5bnks" (UID: "f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61") : configmap "kube-root-ca.crt" not found Jan 21 00:59:17.425665 systemd[1]: Created slice kubepods-besteffort-pod87c14d69_ea86_497d_a362_b5f3757d3737.slice - libcontainer container kubepods-besteffort-pod87c14d69_ea86_497d_a362_b5f3757d3737.slice. 
Jan 21 00:59:17.515923 kubelet[3975]: I0121 00:59:17.515858 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/87c14d69-ea86-497d-a362-b5f3757d3737-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lh5ph\" (UID: \"87c14d69-ea86-497d-a362-b5f3757d3737\") " pod="tigera-operator/tigera-operator-7dcd859c48-lh5ph" Jan 21 00:59:17.515923 kubelet[3975]: I0121 00:59:17.515892 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xzvg\" (UniqueName: \"kubernetes.io/projected/87c14d69-ea86-497d-a362-b5f3757d3737-kube-api-access-2xzvg\") pod \"tigera-operator-7dcd859c48-lh5ph\" (UID: \"87c14d69-ea86-497d-a362-b5f3757d3737\") " pod="tigera-operator/tigera-operator-7dcd859c48-lh5ph" Jan 21 00:59:17.729924 containerd[2473]: time="2026-01-21T00:59:17.729833866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lh5ph,Uid:87c14d69-ea86-497d-a362-b5f3757d3737,Namespace:tigera-operator,Attempt:0,}" Jan 21 00:59:17.761804 containerd[2473]: time="2026-01-21T00:59:17.761717284Z" level=info msg="connecting to shim fda2cfa96984b90e60022a6e88332d879d30aea6bffea2cc4bc8e896aedbb088" address="unix:///run/containerd/s/dea3505578a4867f4dd3bbc959f7f04732f65b08f81c342e74c6e8d3abca110f" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:17.786963 systemd[1]: Started cri-containerd-fda2cfa96984b90e60022a6e88332d879d30aea6bffea2cc4bc8e896aedbb088.scope - libcontainer container fda2cfa96984b90e60022a6e88332d879d30aea6bffea2cc4bc8e896aedbb088. Jan 21 00:59:17.798321 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 21 00:59:17.798396 kernel: audit: type=1334 audit(1768957157.793:464): prog-id=157 op=LOAD Jan 21 00:59:17.793000 audit: BPF prog-id=157 op=LOAD Jan 21 00:59:17.797000 audit: BPF prog-id=158 op=LOAD Jan 21 00:59:17.800799 kernel: audit: type=1334 audit(1768957157.797:465): prog-id=158 op=LOAD Jan 21 00:59:17.800862 kernel: audit: type=1300 audit(1768957157.797:465): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.797000 audit[4044]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.811866 kernel: audit: type=1327 audit(1768957157.797:465): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.797000 audit: BPF prog-id=158 op=UNLOAD Jan 21 00:59:17.821559 kernel: audit: type=1334 audit(1768957157.797:466): prog-id=158 op=UNLOAD Jan 21 00:59:17.821616 kernel: audit: type=1300 
audit(1768957157.797:466): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.797000 audit[4044]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.829798 kernel: audit: type=1327 audit(1768957157.797:466): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.797000 audit: BPF prog-id=159 op=LOAD Jan 21 00:59:17.797000 audit[4044]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.836468 kernel: audit: type=1334 audit(1768957157.797:467): prog-id=159 op=LOAD Jan 21 00:59:17.836515 kernel: audit: type=1300 audit(1768957157.797:467): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.841636 kernel: audit: type=1327 audit(1768957157.797:467): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.797000 audit: BPF prog-id=160 op=LOAD Jan 21 00:59:17.797000 audit[4044]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.797000 audit: BPF prog-id=160 op=UNLOAD Jan 21 00:59:17.797000 audit[4044]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.797000 audit: BPF prog-id=159 op=UNLOAD Jan 21 00:59:17.797000 audit[4044]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.797000 audit: BPF prog-id=161 op=LOAD Jan 21 00:59:17.797000 audit[4044]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4032 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664613263666139363938346239306536303032326136653838333332 Jan 21 00:59:17.852761 containerd[2473]: time="2026-01-21T00:59:17.852735302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lh5ph,Uid:87c14d69-ea86-497d-a362-b5f3757d3737,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fda2cfa96984b90e60022a6e88332d879d30aea6bffea2cc4bc8e896aedbb088\"" Jan 21 00:59:17.854221 containerd[2473]: time="2026-01-21T00:59:17.854178780Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 21 00:59:17.870478 containerd[2473]: time="2026-01-21T00:59:17.870436916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5bnks,Uid:f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61,Namespace:kube-system,Attempt:0,}" Jan 21 00:59:17.905756 containerd[2473]: time="2026-01-21T00:59:17.905054493Z" level=info msg="connecting to shim 365313b6a22fb92007a231268e16d4999b2149fa97226e428c75cc49650e737d" address="unix:///run/containerd/s/da18f6309dc6300d62c7b07939a94527e7cae5993830802f5d79b7445aaa97a1" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:17.923942 systemd[1]: Started cri-containerd-365313b6a22fb92007a231268e16d4999b2149fa97226e428c75cc49650e737d.scope - libcontainer container 365313b6a22fb92007a231268e16d4999b2149fa97226e428c75cc49650e737d. 
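Editor's note: the runc audit records around each "Started cri-containerd-….scope" line carry only a truncated sandbox/container ID in their PROCTITLE (the log cuts the hex value short), while the containerd and systemd messages report the full 64-character IDs. A small illustrative sketch (again, not part of any tooling referenced in this log) that matches such a truncated ID back to the full IDs seen above:

```python
# Match a truncated ID recovered from an audit PROCTITLE against the full
# sandbox IDs reported by containerd/systemd in the messages above.
def match_full_id(truncated: str, full_ids: list[str]) -> list[str]:
    return [fid for fid in full_ids if fid.startswith(truncated)]

full_ids = [
    "fda2cfa96984b90e60022a6e88332d879d30aea6bffea2cc4bc8e896aedbb088",  # tigera-operator-7dcd859c48-lh5ph sandbox
    "365313b6a22fb92007a231268e16d4999b2149fa97226e428c75cc49650e737d",  # kube-proxy-5bnks sandbox
]
print(match_full_id("fda2cfa96984b90e60022a6e88332", full_ids))
# ['fda2cfa96984b90e60022a6e88332d879d30aea6bffea2cc4bc8e896aedbb088']
```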
Jan 21 00:59:17.929000 audit: BPF prog-id=162 op=LOAD Jan 21 00:59:17.929000 audit: BPF prog-id=163 op=LOAD Jan 21 00:59:17.929000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336353331336236613232666239323030376132333132363865313664 Jan 21 00:59:17.929000 audit: BPF prog-id=163 op=UNLOAD Jan 21 00:59:17.929000 audit[4089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336353331336236613232666239323030376132333132363865313664 Jan 21 00:59:17.930000 audit: BPF prog-id=164 op=LOAD Jan 21 00:59:17.930000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336353331336236613232666239323030376132333132363865313664 Jan 21 00:59:17.930000 audit: BPF prog-id=165 op=LOAD Jan 21 00:59:17.930000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336353331336236613232666239323030376132333132363865313664 Jan 21 00:59:17.930000 audit: BPF prog-id=165 op=UNLOAD Jan 21 00:59:17.930000 audit[4089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336353331336236613232666239323030376132333132363865313664 Jan 21 00:59:17.930000 audit: BPF prog-id=164 op=UNLOAD Jan 21 00:59:17.930000 audit[4089]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336353331336236613232666239323030376132333132363865313664 Jan 21 00:59:17.930000 audit: BPF prog-id=166 op=LOAD Jan 21 00:59:17.930000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4078 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336353331336236613232666239323030376132333132363865313664 Jan 21 00:59:17.944644 containerd[2473]: time="2026-01-21T00:59:17.944616793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5bnks,Uid:f5d1dfea-7a66-4e2e-9bc2-c8411e3b4e61,Namespace:kube-system,Attempt:0,} returns sandbox id \"365313b6a22fb92007a231268e16d4999b2149fa97226e428c75cc49650e737d\"" Jan 21 00:59:17.950858 containerd[2473]: time="2026-01-21T00:59:17.950822398Z" level=info msg="CreateContainer within sandbox \"365313b6a22fb92007a231268e16d4999b2149fa97226e428c75cc49650e737d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 21 00:59:17.965338 containerd[2473]: time="2026-01-21T00:59:17.965314865Z" level=info msg="Container 6b704f6f14957e793daa401e9c195c4bdcd098cc317ad70895f04c37f3e751f7: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:17.977100 containerd[2473]: time="2026-01-21T00:59:17.977076237Z" level=info msg="CreateContainer within sandbox \"365313b6a22fb92007a231268e16d4999b2149fa97226e428c75cc49650e737d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b704f6f14957e793daa401e9c195c4bdcd098cc317ad70895f04c37f3e751f7\"" Jan 21 00:59:17.977637 containerd[2473]: time="2026-01-21T00:59:17.977619148Z" level=info msg="StartContainer for \"6b704f6f14957e793daa401e9c195c4bdcd098cc317ad70895f04c37f3e751f7\"" Jan 21 00:59:17.979134 containerd[2473]: time="2026-01-21T00:59:17.979106232Z" level=info msg="connecting to shim 6b704f6f14957e793daa401e9c195c4bdcd098cc317ad70895f04c37f3e751f7" address="unix:///run/containerd/s/da18f6309dc6300d62c7b07939a94527e7cae5993830802f5d79b7445aaa97a1" protocol=ttrpc version=3 Jan 21 00:59:18.001929 systemd[1]: Started cri-containerd-6b704f6f14957e793daa401e9c195c4bdcd098cc317ad70895f04c37f3e751f7.scope - libcontainer container 6b704f6f14957e793daa401e9c195c4bdcd098cc317ad70895f04c37f3e751f7. 
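Editor's note: right after the kube-proxy container starts, the NETFILTER_CFG / PROCTITLE records that follow (and continue past this excerpt) show it, evidently, creating its KUBE-PROXY-CANARY chains in the mangle, nat and filter tables for both ip6tables and iptables, then the KUBE-EXTERNAL-SERVICES chain and rules. Those PROCTITLE values decode the same way as the runc ones earlier; for example, the first ip6tables record below decodes as in this small sketch:

```python
# Decode the first ip6tables PROCTITLE from the NETFILTER_CFG records below
# (same NUL-separated, hex-encoded argv format as the runc audit records).
raw = bytes.fromhex(
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
)
print([arg.decode() for arg in raw.split(b"\x00") if arg])
# ['ip6tables', '-w', '5', '-W', '100000', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']
```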
Jan 21 00:59:18.036000 audit: BPF prog-id=167 op=LOAD Jan 21 00:59:18.036000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4078 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662373034663666313439353765373933646161343031653963313935 Jan 21 00:59:18.036000 audit: BPF prog-id=168 op=LOAD Jan 21 00:59:18.036000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4078 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662373034663666313439353765373933646161343031653963313935 Jan 21 00:59:18.036000 audit: BPF prog-id=168 op=UNLOAD Jan 21 00:59:18.036000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662373034663666313439353765373933646161343031653963313935 Jan 21 00:59:18.036000 audit: BPF prog-id=167 op=UNLOAD Jan 21 00:59:18.036000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662373034663666313439353765373933646161343031653963313935 Jan 21 00:59:18.036000 audit: BPF prog-id=169 op=LOAD Jan 21 00:59:18.036000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4078 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662373034663666313439353765373933646161343031653963313935 Jan 21 00:59:18.055748 containerd[2473]: time="2026-01-21T00:59:18.055712888Z" level=info msg="StartContainer for 
\"6b704f6f14957e793daa401e9c195c4bdcd098cc317ad70895f04c37f3e751f7\" returns successfully" Jan 21 00:59:18.144000 audit[4183]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=4183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.144000 audit[4183]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2c8079a0 a2=0 a3=7fff2c80798c items=0 ppid=4132 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 00:59:18.147000 audit[4186]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=4186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.147000 audit[4186]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf294ef50 a2=0 a3=7ffdf294ef3c items=0 ppid=4132 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 00:59:18.148000 audit[4188]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=4188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.148000 audit[4188]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff12645f70 a2=0 a3=7fff12645f5c items=0 ppid=4132 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 00:59:18.150000 audit[4189]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.150000 audit[4189]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeae36d4d0 a2=0 a3=7ffeae36d4bc items=0 ppid=4132 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 00:59:18.151000 audit[4190]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=4190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.151000 audit[4190]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdafdf0200 a2=0 a3=7ffdafdf01ec items=0 ppid=4132 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 00:59:18.153000 audit[4191]: NETFILTER_CFG table=filter:62 
family=2 entries=1 op=nft_register_chain pid=4191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.153000 audit[4191]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9c9e95f0 a2=0 a3=7fff9c9e95dc items=0 ppid=4132 pid=4191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 00:59:18.250000 audit[4192]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.250000 audit[4192]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff98475120 a2=0 a3=7fff9847510c items=0 ppid=4132 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.250000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 00:59:18.253000 audit[4194]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.253000 audit[4194]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe66917b00 a2=0 a3=7ffe66917aec items=0 ppid=4132 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 21 00:59:18.256000 audit[4197]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.256000 audit[4197]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffbaaae2c0 a2=0 a3=7fffbaaae2ac items=0 ppid=4132 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.256000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 21 00:59:18.257000 audit[4198]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.257000 audit[4198]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf7a06800 a2=0 a3=7ffcf7a067ec items=0 ppid=4132 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.257000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 00:59:18.259000 audit[4200]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.259000 audit[4200]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc4edc16b0 a2=0 a3=7ffc4edc169c items=0 ppid=4132 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 00:59:18.260000 audit[4201]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.260000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd11b08b30 a2=0 a3=7ffd11b08b1c items=0 ppid=4132 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 00:59:18.263000 audit[4203]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4203 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.263000 audit[4203]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc6683a440 a2=0 a3=7ffc6683a42c items=0 ppid=4132 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 00:59:18.266000 audit[4206]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.266000 audit[4206]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff54505860 a2=0 a3=7fff5450584c items=0 ppid=4132 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 21 00:59:18.267000 audit[4207]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.267000 audit[4207]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc61fb7be0 a2=0 a3=7ffc61fb7bcc items=0 
ppid=4132 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 00:59:18.269000 audit[4209]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.269000 audit[4209]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea282c150 a2=0 a3=7ffea282c13c items=0 ppid=4132 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 00:59:18.270000 audit[4210]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.270000 audit[4210]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc9ac9ac0 a2=0 a3=7ffcc9ac9aac items=0 ppid=4132 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 00:59:18.273000 audit[4212]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.273000 audit[4212]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed9a6cc10 a2=0 a3=7ffed9a6cbfc items=0 ppid=4132 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 00:59:18.276000 audit[4215]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.276000 audit[4215]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca5a891f0 a2=0 a3=7ffca5a891dc items=0 ppid=4132 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 00:59:18.280000 audit[4218]: NETFILTER_CFG table=filter:76 
family=2 entries=1 op=nft_register_rule pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.280000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd12f04ea0 a2=0 a3=7ffd12f04e8c items=0 ppid=4132 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 00:59:18.281000 audit[4219]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.281000 audit[4219]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffde9475fd0 a2=0 a3=7ffde9475fbc items=0 ppid=4132 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 00:59:18.283000 audit[4221]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.283000 audit[4221]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe3c18b360 a2=0 a3=7ffe3c18b34c items=0 ppid=4132 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 00:59:18.287000 audit[4224]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.287000 audit[4224]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd85cd7d0 a2=0 a3=7ffdd85cd7bc items=0 ppid=4132 pid=4224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 00:59:18.288000 audit[4225]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.288000 audit[4225]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd1a53050 a2=0 a3=7fffd1a5303c items=0 ppid=4132 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.288000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 00:59:18.291000 audit[4227]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:59:18.291000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcdd805310 a2=0 a3=7ffcdd8052fc items=0 ppid=4132 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.291000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 00:59:18.377000 audit[4233]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:18.377000 audit[4233]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd4605a400 a2=0 a3=7ffd4605a3ec items=0 ppid=4132 pid=4233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:18.432000 audit[4233]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:18.432000 audit[4233]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd4605a400 a2=0 a3=7ffd4605a3ec items=0 ppid=4132 pid=4233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:18.434000 audit[4238]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.434000 audit[4238]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd447e42c0 a2=0 a3=7ffd447e42ac items=0 ppid=4132 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 00:59:18.436000 audit[4240]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.436000 audit[4240]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffca21794c0 a2=0 a3=7ffca21794ac items=0 ppid=4132 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.436000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 21 00:59:18.440000 audit[4243]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.440000 audit[4243]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd767c25f0 a2=0 a3=7ffd767c25dc items=0 ppid=4132 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 21 00:59:18.441000 audit[4244]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.441000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc17082d40 a2=0 a3=7ffc17082d2c items=0 ppid=4132 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 00:59:18.443000 audit[4246]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.443000 audit[4246]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd57098f0 a2=0 a3=7ffdd57098dc items=0 ppid=4132 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 00:59:18.444000 audit[4247]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.444000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeca27adc0 a2=0 a3=7ffeca27adac items=0 ppid=4132 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 00:59:18.447000 audit[4249]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.447000 audit[4249]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd97b55aa0 a2=0 
a3=7ffd97b55a8c items=0 ppid=4132 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.447000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 21 00:59:18.453000 audit[4252]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.453000 audit[4252]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff54f548e0 a2=0 a3=7fff54f548cc items=0 ppid=4132 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 00:59:18.454000 audit[4253]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.454000 audit[4253]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdae21320 a2=0 a3=7fffdae2130c items=0 ppid=4132 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 00:59:18.456000 audit[4255]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.456000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff559daf50 a2=0 a3=7fff559daf3c items=0 ppid=4132 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.456000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 00:59:18.458000 audit[4256]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.458000 audit[4256]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc485e7580 a2=0 a3=7ffc485e756c items=0 ppid=4132 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 00:59:18.460000 
audit[4258]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.460000 audit[4258]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff3aef9ff0 a2=0 a3=7fff3aef9fdc items=0 ppid=4132 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 00:59:18.463000 audit[4261]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.463000 audit[4261]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefbd8f760 a2=0 a3=7ffefbd8f74c items=0 ppid=4132 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 00:59:18.466000 audit[4264]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.466000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffebce034a0 a2=0 a3=7ffebce0348c items=0 ppid=4132 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 21 00:59:18.467000 audit[4265]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.467000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff88474330 a2=0 a3=7fff8847431c items=0 ppid=4132 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 00:59:18.470000 audit[4267]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.470000 audit[4267]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff182e2b00 a2=0 a3=7fff182e2aec items=0 ppid=4132 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 00:59:18.473000 audit[4270]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.473000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff547f23a0 a2=0 a3=7fff547f238c items=0 ppid=4132 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 00:59:18.475000 audit[4271]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.475000 audit[4271]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc733e80f0 a2=0 a3=7ffc733e80dc items=0 ppid=4132 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 00:59:18.477000 audit[4273]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.477000 audit[4273]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd9ec425a0 a2=0 a3=7ffd9ec4258c items=0 ppid=4132 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 00:59:18.478000 audit[4274]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.478000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcffc78a10 a2=0 a3=7ffcffc789fc items=0 ppid=4132 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 00:59:18.480000 audit[4276]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.480000 audit[4276]: SYSCALL arch=c000003e syscall=46 
success=yes exit=228 a0=3 a1=7ffe7f351970 a2=0 a3=7ffe7f35195c items=0 ppid=4132 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 00:59:18.484000 audit[4279]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:59:18.484000 audit[4279]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffdb60c400 a2=0 a3=7fffdb60c3ec items=0 ppid=4132 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 00:59:18.487000 audit[4281]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 00:59:18.487000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd4cd25330 a2=0 a3=7ffd4cd2531c items=0 ppid=4132 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.487000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:18.487000 audit[4281]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 00:59:18.487000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd4cd25330 a2=0 a3=7ffd4cd2531c items=0 ppid=4132 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:18.487000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:19.296351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518389108.mount: Deactivated successfully. 
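The NETFILTER_CFG records in this stretch are kube-proxy programming its chains (KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL, KUBE-POSTROUTING, and so on) into the mangle, filter, and nat tables; family=2 is AF_INET and family=10 is AF_INET6, so each chain is registered once per address family. A small parsing sketch for records of this shape (field layout assumed from the lines above, not a general auditd schema; parse_netfilter_cfg is an illustrative helper):

    import re

    FAMILIES = {"2": "IPv4 (AF_INET)", "10": "IPv6 (AF_INET6)"}

    def parse_netfilter_cfg(record: str) -> dict:
        """Split a NETFILTER_CFG record like the ones above into named fields."""
        fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', record))
        table = fields["table"].split(":", 1)[0]        # "nat:77" -> "nat" (the :NN suffix is a kernel-side counter)
        return {
            "table": table,                             # mangle / nat / filter
            "family": FAMILIES.get(fields["family"], fields["family"]),
            "entries": int(fields["entries"]),
            "op": fields["op"],                         # nft_register_chain / nft_register_rule
            "comm": fields["comm"].strip('"'),          # iptables / ip6tables / iptables-restor(e)
        }

    print(parse_netfilter_cfg('table=nat:77 family=2 entries=1 op=nft_register_chain '
                              'pid=4219 subj=system_u:system_r:kernel_t:s0 comm="iptables"'))
    # -> {'table': 'nat', 'family': 'IPv4 (AF_INET)', 'entries': 1,
    #     'op': 'nft_register_chain', 'comm': 'iptables'}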
Jan 21 00:59:19.718085 containerd[2473]: time="2026-01-21T00:59:19.718039447Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:19.720740 containerd[2473]: time="2026-01-21T00:59:19.720711214Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 21 00:59:19.723357 containerd[2473]: time="2026-01-21T00:59:19.722617768Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:19.734579 containerd[2473]: time="2026-01-21T00:59:19.734551713Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:19.734996 containerd[2473]: time="2026-01-21T00:59:19.734974923Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.880766216s" Jan 21 00:59:19.735073 containerd[2473]: time="2026-01-21T00:59:19.735061519Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 21 00:59:19.741989 containerd[2473]: time="2026-01-21T00:59:19.741962896Z" level=info msg="CreateContainer within sandbox \"fda2cfa96984b90e60022a6e88332d879d30aea6bffea2cc4bc8e896aedbb088\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 21 00:59:19.757816 containerd[2473]: time="2026-01-21T00:59:19.757251871Z" level=info msg="Container 054219cb4a7e41a5839c414055906f986e65bc2538bd46410c9f3eccd7f68669: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:19.776484 containerd[2473]: time="2026-01-21T00:59:19.776460151Z" level=info msg="CreateContainer within sandbox \"fda2cfa96984b90e60022a6e88332d879d30aea6bffea2cc4bc8e896aedbb088\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"054219cb4a7e41a5839c414055906f986e65bc2538bd46410c9f3eccd7f68669\"" Jan 21 00:59:19.777709 containerd[2473]: time="2026-01-21T00:59:19.776906070Z" level=info msg="StartContainer for \"054219cb4a7e41a5839c414055906f986e65bc2538bd46410c9f3eccd7f68669\"" Jan 21 00:59:19.777709 containerd[2473]: time="2026-01-21T00:59:19.777634251Z" level=info msg="connecting to shim 054219cb4a7e41a5839c414055906f986e65bc2538bd46410c9f3eccd7f68669" address="unix:///run/containerd/s/dea3505578a4867f4dd3bbc959f7f04732f65b08f81c342e74c6e8d3abca110f" protocol=ttrpc version=3 Jan 21 00:59:19.798946 systemd[1]: Started cri-containerd-054219cb4a7e41a5839c414055906f986e65bc2538bd46410c9f3eccd7f68669.scope - libcontainer container 054219cb4a7e41a5839c414055906f986e65bc2538bd46410c9f3eccd7f68669. 
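The pull above reports 23558205 bytes read against a listed compressed size of 25057686, completed in 1.880766216s. A back-of-the-envelope rate from those two reported figures (actual per-layer throughput may differ, for example if some content was already present locally):

    # Effective pull rate for quay.io/tigera/operator:v1.38.7, using only the
    # figures reported in the log above.
    bytes_read = 23_558_205
    duration_s = 1.880766216
    print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")  # ~12.5 MB/s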
Jan 21 00:59:19.808000 audit: BPF prog-id=170 op=LOAD Jan 21 00:59:19.809000 audit: BPF prog-id=171 op=LOAD Jan 21 00:59:19.809000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4032 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:19.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343231396362346137653431613538333963343134303535393036 Jan 21 00:59:19.809000 audit: BPF prog-id=171 op=UNLOAD Jan 21 00:59:19.809000 audit[4290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:19.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343231396362346137653431613538333963343134303535393036 Jan 21 00:59:19.809000 audit: BPF prog-id=172 op=LOAD Jan 21 00:59:19.809000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4032 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:19.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343231396362346137653431613538333963343134303535393036 Jan 21 00:59:19.809000 audit: BPF prog-id=173 op=LOAD Jan 21 00:59:19.809000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4032 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:19.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343231396362346137653431613538333963343134303535393036 Jan 21 00:59:19.809000 audit: BPF prog-id=173 op=UNLOAD Jan 21 00:59:19.809000 audit[4290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:19.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343231396362346137653431613538333963343134303535393036 Jan 21 00:59:19.809000 audit: BPF prog-id=172 op=UNLOAD Jan 21 00:59:19.809000 audit[4290]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:19.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343231396362346137653431613538333963343134303535393036 Jan 21 00:59:19.809000 audit: BPF prog-id=174 op=LOAD Jan 21 00:59:19.809000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4032 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:19.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343231396362346137653431613538333963343134303535393036 Jan 21 00:59:19.826750 containerd[2473]: time="2026-01-21T00:59:19.826728331Z" level=info msg="StartContainer for \"054219cb4a7e41a5839c414055906f986e65bc2538bd46410c9f3eccd7f68669\" returns successfully" Jan 21 00:59:19.927990 kubelet[3975]: I0121 00:59:19.927327 3975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5bnks" podStartSLOduration=3.927310106 podStartE2EDuration="3.927310106s" podCreationTimestamp="2026-01-21 00:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:59:18.936245054 +0000 UTC m=+7.148973615" watchObservedRunningTime="2026-01-21 00:59:19.927310106 +0000 UTC m=+8.140038658" Jan 21 00:59:19.927990 kubelet[3975]: I0121 00:59:19.927416 3975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lh5ph" podStartSLOduration=1.045617778 podStartE2EDuration="2.927409557s" podCreationTimestamp="2026-01-21 00:59:17 +0000 UTC" firstStartedPulling="2026-01-21 00:59:17.853756228 +0000 UTC m=+6.066484772" lastFinishedPulling="2026-01-21 00:59:19.735548008 +0000 UTC m=+7.948276551" observedRunningTime="2026-01-21 00:59:19.927250792 +0000 UTC m=+8.139979363" watchObservedRunningTime="2026-01-21 00:59:19.927409557 +0000 UTC m=+8.140138107" Jan 21 00:59:25.406219 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 21 00:59:25.406329 kernel: audit: type=1106 audit(1768957165.397:544): pid=2959 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:59:25.397000 audit[2959]: USER_END pid=2959 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 21 00:59:25.398447 sudo[2959]: pam_unix(sudo:session): session closed for user root Jan 21 00:59:25.397000 audit[2959]: CRED_DISP pid=2959 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:59:25.413790 kernel: audit: type=1104 audit(1768957165.397:545): pid=2959 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:59:25.516795 sshd[2945]: Connection closed by 10.200.16.10 port 56712 Jan 21 00:59:25.517457 sshd-session[2938]: pam_unix(sshd:session): session closed for user core Jan 21 00:59:25.518000 audit[2938]: USER_END pid=2938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:59:25.522364 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:56712.service: Deactivated successfully. Jan 21 00:59:25.527123 systemd[1]: session-10.scope: Deactivated successfully. Jan 21 00:59:25.527933 kernel: audit: type=1106 audit(1768957165.518:546): pid=2938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:59:25.528131 systemd[1]: session-10.scope: Consumed 3.292s CPU time, 232M memory peak. Jan 21 00:59:25.518000 audit[2938]: CRED_DISP pid=2938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:59:25.532272 systemd-logind[2449]: Session 10 logged out. Waiting for processes to exit. Jan 21 00:59:25.534202 systemd-logind[2449]: Removed session 10. Jan 21 00:59:25.535799 kernel: audit: type=1104 audit(1768957165.518:547): pid=2938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 00:59:25.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.39:22-10.200.16.10:56712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:59:25.544800 kernel: audit: type=1131 audit(1768957165.518:548): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.39:22-10.200.16.10:56712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:59:26.901000 audit[4372]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:26.908818 kernel: audit: type=1325 audit(1768957166.901:549): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:26.901000 audit[4372]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffdf825f70 a2=0 a3=7fffdf825f5c items=0 ppid=4132 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:26.918807 kernel: audit: type=1300 audit(1768957166.901:549): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffdf825f70 a2=0 a3=7fffdf825f5c items=0 ppid=4132 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:26.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:26.924869 kernel: audit: type=1327 audit(1768957166.901:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:26.918000 audit[4372]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:26.929790 kernel: audit: type=1325 audit(1768957166.918:550): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:26.918000 audit[4372]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffdf825f70 a2=0 a3=0 items=0 ppid=4132 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:26.939123 kernel: audit: type=1300 audit(1768957166.918:550): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffdf825f70 a2=0 a3=0 items=0 ppid=4132 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:26.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:26.958000 audit[4374]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4374 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:26.958000 audit[4374]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed26e8e40 a2=0 a3=7ffed26e8e2c items=0 ppid=4132 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:26.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:26.962000 audit[4374]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4374 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:26.962000 audit[4374]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed26e8e40 a2=0 a3=0 items=0 ppid=4132 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:26.962000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:29.474000 audit[4376]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:29.474000 audit[4376]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff3bf6ab60 a2=0 a3=7fff3bf6ab4c items=0 ppid=4132 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:29.474000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:29.479000 audit[4376]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:29.479000 audit[4376]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3bf6ab60 a2=0 a3=0 items=0 ppid=4132 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:29.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:29.500000 audit[4378]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:29.500000 audit[4378]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe742379d0 a2=0 a3=7ffe742379bc items=0 ppid=4132 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:29.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:29.505000 audit[4378]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:29.505000 audit[4378]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe742379d0 a2=0 a3=0 items=0 ppid=4132 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:29.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:30.524864 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 21 00:59:30.524979 kernel: audit: type=1325 audit(1768957170.517:557): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4381 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 21 00:59:30.517000 audit[4381]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:30.533459 kernel: audit: type=1300 audit(1768957170.517:557): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffedb407d50 a2=0 a3=7ffedb407d3c items=0 ppid=4132 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:30.517000 audit[4381]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffedb407d50 a2=0 a3=7ffedb407d3c items=0 ppid=4132 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:30.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:30.543030 kernel: audit: type=1327 audit(1768957170.517:557): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:30.543098 kernel: audit: type=1325 audit(1768957170.533:558): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:30.533000 audit[4381]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:30.551224 kernel: audit: type=1300 audit(1768957170.533:558): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedb407d50 a2=0 a3=0 items=0 ppid=4132 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:30.533000 audit[4381]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedb407d50 a2=0 a3=0 items=0 ppid=4132 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:30.555834 kernel: audit: type=1327 audit(1768957170.533:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:30.533000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:31.208606 systemd[1]: Created slice kubepods-besteffort-podb5ba597f_8ab9_4bd4_8dbb_063ce91ba552.slice - libcontainer container kubepods-besteffort-podb5ba597f_8ab9_4bd4_8dbb_063ce91ba552.slice. 
Jan 21 00:59:31.309893 kubelet[3975]: I0121 00:59:31.309750 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5ba597f-8ab9-4bd4-8dbb-063ce91ba552-tigera-ca-bundle\") pod \"calico-typha-55d86bb5dc-nkd9b\" (UID: \"b5ba597f-8ab9-4bd4-8dbb-063ce91ba552\") " pod="calico-system/calico-typha-55d86bb5dc-nkd9b" Jan 21 00:59:31.309893 kubelet[3975]: I0121 00:59:31.309827 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b5ba597f-8ab9-4bd4-8dbb-063ce91ba552-typha-certs\") pod \"calico-typha-55d86bb5dc-nkd9b\" (UID: \"b5ba597f-8ab9-4bd4-8dbb-063ce91ba552\") " pod="calico-system/calico-typha-55d86bb5dc-nkd9b" Jan 21 00:59:31.309893 kubelet[3975]: I0121 00:59:31.309849 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp4m6\" (UniqueName: \"kubernetes.io/projected/b5ba597f-8ab9-4bd4-8dbb-063ce91ba552-kube-api-access-rp4m6\") pod \"calico-typha-55d86bb5dc-nkd9b\" (UID: \"b5ba597f-8ab9-4bd4-8dbb-063ce91ba552\") " pod="calico-system/calico-typha-55d86bb5dc-nkd9b" Jan 21 00:59:31.395022 systemd[1]: Created slice kubepods-besteffort-podc22a97cf_69e7_484d_95ab_cd324714c260.slice - libcontainer container kubepods-besteffort-podc22a97cf_69e7_484d_95ab_cd324714c260.slice. Jan 21 00:59:31.410383 kubelet[3975]: I0121 00:59:31.410258 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c22a97cf-69e7-484d-95ab-cd324714c260-cni-log-dir\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.410383 kubelet[3975]: I0121 00:59:31.410346 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c22a97cf-69e7-484d-95ab-cd324714c260-var-lib-calico\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.410383 kubelet[3975]: I0121 00:59:31.410363 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c22a97cf-69e7-484d-95ab-cd324714c260-xtables-lock\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.411803 kubelet[3975]: I0121 00:59:31.410845 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c22a97cf-69e7-484d-95ab-cd324714c260-lib-modules\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.411803 kubelet[3975]: I0121 00:59:31.411755 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c22a97cf-69e7-484d-95ab-cd324714c260-cni-net-dir\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.412397 kubelet[3975]: I0121 00:59:31.411937 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c22a97cf-69e7-484d-95ab-cd324714c260-tigera-ca-bundle\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.412397 kubelet[3975]: I0121 00:59:31.411983 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6s9\" (UniqueName: \"kubernetes.io/projected/c22a97cf-69e7-484d-95ab-cd324714c260-kube-api-access-qx6s9\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.412397 kubelet[3975]: I0121 00:59:31.412018 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c22a97cf-69e7-484d-95ab-cd324714c260-flexvol-driver-host\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.412397 kubelet[3975]: I0121 00:59:31.412036 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c22a97cf-69e7-484d-95ab-cd324714c260-var-run-calico\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.412397 kubelet[3975]: I0121 00:59:31.412056 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c22a97cf-69e7-484d-95ab-cd324714c260-cni-bin-dir\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.412551 kubelet[3975]: I0121 00:59:31.412085 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c22a97cf-69e7-484d-95ab-cd324714c260-node-certs\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.412551 kubelet[3975]: I0121 00:59:31.412101 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c22a97cf-69e7-484d-95ab-cd324714c260-policysync\") pod \"calico-node-vd4m6\" (UID: \"c22a97cf-69e7-484d-95ab-cd324714c260\") " pod="calico-system/calico-node-vd4m6" Jan 21 00:59:31.513076 containerd[2473]: time="2026-01-21T00:59:31.512990271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d86bb5dc-nkd9b,Uid:b5ba597f-8ab9-4bd4-8dbb-063ce91ba552,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:31.520351 kubelet[3975]: E0121 00:59:31.520299 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.520351 kubelet[3975]: W0121 00:59:31.520317 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.520351 kubelet[3975]: E0121 00:59:31.520334 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.525547 kubelet[3975]: E0121 00:59:31.524280 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.525547 kubelet[3975]: W0121 00:59:31.524296 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.525547 kubelet[3975]: E0121 00:59:31.524310 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.552000 audit[4389]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:31.565060 kernel: audit: type=1325 audit(1768957171.552:559): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:31.552000 audit[4389]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffce7d93b80 a2=0 a3=7ffce7d93b6c items=0 ppid=4132 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.573920 kernel: audit: type=1300 audit(1768957171.552:559): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffce7d93b80 a2=0 a3=7ffce7d93b6c items=0 ppid=4132 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:31.583815 kernel: audit: type=1327 audit(1768957171.552:559): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:31.583855 kernel: audit: type=1325 audit(1768957171.564:560): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:31.564000 audit[4389]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:31.585739 kubelet[3975]: E0121 00:59:31.585548 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:31.564000 audit[4389]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce7d93b80 a2=0 a3=0 items=0 ppid=4132 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:31.596698 kubelet[3975]: E0121 00:59:31.596677 3975 driver-call.go:262] Failed to unmarshal 
output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.596698 kubelet[3975]: W0121 00:59:31.596698 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.597015 kubelet[3975]: E0121 00:59:31.596713 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.597196 kubelet[3975]: E0121 00:59:31.597180 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.597280 kubelet[3975]: W0121 00:59:31.597198 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.597280 kubelet[3975]: E0121 00:59:31.597211 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.597510 kubelet[3975]: E0121 00:59:31.597494 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.597645 kubelet[3975]: W0121 00:59:31.597511 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.597645 kubelet[3975]: E0121 00:59:31.597524 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.598182 kubelet[3975]: E0121 00:59:31.598133 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.598182 kubelet[3975]: W0121 00:59:31.598147 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.598182 kubelet[3975]: E0121 00:59:31.598159 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.598790 kubelet[3975]: E0121 00:59:31.598753 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.598790 kubelet[3975]: W0121 00:59:31.598783 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.598878 kubelet[3975]: E0121 00:59:31.598800 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.598992 kubelet[3975]: E0121 00:59:31.598904 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.598992 kubelet[3975]: W0121 00:59:31.598911 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.598992 kubelet[3975]: E0121 00:59:31.598918 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.599153 kubelet[3975]: E0121 00:59:31.599038 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.599153 kubelet[3975]: W0121 00:59:31.599043 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.599153 kubelet[3975]: E0121 00:59:31.599050 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.599715 kubelet[3975]: E0121 00:59:31.599691 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.599715 kubelet[3975]: W0121 00:59:31.599710 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.599801 kubelet[3975]: E0121 00:59:31.599721 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.599981 kubelet[3975]: E0121 00:59:31.599857 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.599981 kubelet[3975]: W0121 00:59:31.599866 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.599981 kubelet[3975]: E0121 00:59:31.599873 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.599981 kubelet[3975]: E0121 00:59:31.599963 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.599981 kubelet[3975]: W0121 00:59:31.599968 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.599981 kubelet[3975]: E0121 00:59:31.599975 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.600302 kubelet[3975]: E0121 00:59:31.600164 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.600302 kubelet[3975]: W0121 00:59:31.600171 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.600302 kubelet[3975]: E0121 00:59:31.600178 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.600466 kubelet[3975]: E0121 00:59:31.600445 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.600466 kubelet[3975]: W0121 00:59:31.600454 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.600466 kubelet[3975]: E0121 00:59:31.600463 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.600910 kubelet[3975]: E0121 00:59:31.600891 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.601035 kubelet[3975]: W0121 00:59:31.600927 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.601035 kubelet[3975]: E0121 00:59:31.600939 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.602022 kubelet[3975]: E0121 00:59:31.601545 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.602022 kubelet[3975]: W0121 00:59:31.601561 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.602022 kubelet[3975]: E0121 00:59:31.601574 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.602022 kubelet[3975]: E0121 00:59:31.601989 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.602022 kubelet[3975]: W0121 00:59:31.602000 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.602022 kubelet[3975]: E0121 00:59:31.602012 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.602204 kubelet[3975]: E0121 00:59:31.602138 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.602204 kubelet[3975]: W0121 00:59:31.602143 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.602204 kubelet[3975]: E0121 00:59:31.602151 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.602416 kubelet[3975]: E0121 00:59:31.602403 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.602416 kubelet[3975]: W0121 00:59:31.602415 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.602466 kubelet[3975]: E0121 00:59:31.602424 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.602499 containerd[2473]: time="2026-01-21T00:59:31.602467023Z" level=info msg="connecting to shim 19b6053d24fdbb75d7b9cd88c6fc63de78040528273f1ad431214ca6fd84eb35" address="unix:///run/containerd/s/3d0e9f0bf2bbf3e8bdc3d00946eb6eba10ee697e4ba502dfc9eedd5b11a81d81" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:31.602708 kubelet[3975]: E0121 00:59:31.602697 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.602746 kubelet[3975]: W0121 00:59:31.602707 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.602746 kubelet[3975]: E0121 00:59:31.602717 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.603289 kubelet[3975]: E0121 00:59:31.603269 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.603289 kubelet[3975]: W0121 00:59:31.603288 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.603367 kubelet[3975]: E0121 00:59:31.603300 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.603436 kubelet[3975]: E0121 00:59:31.603427 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.603462 kubelet[3975]: W0121 00:59:31.603438 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.603462 kubelet[3975]: E0121 00:59:31.603446 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.613319 kubelet[3975]: E0121 00:59:31.612816 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.613319 kubelet[3975]: W0121 00:59:31.612837 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.613319 kubelet[3975]: E0121 00:59:31.612875 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.613319 kubelet[3975]: I0121 00:59:31.612908 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ce3bc266-4945-4335-b09f-5dc1a5736d5d-registration-dir\") pod \"csi-node-driver-6b85f\" (UID: \"ce3bc266-4945-4335-b09f-5dc1a5736d5d\") " pod="calico-system/csi-node-driver-6b85f" Jan 21 00:59:31.613319 kubelet[3975]: E0121 00:59:31.613256 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.613319 kubelet[3975]: W0121 00:59:31.613272 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.613319 kubelet[3975]: E0121 00:59:31.613307 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.613538 kubelet[3975]: I0121 00:59:31.613331 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce3bc266-4945-4335-b09f-5dc1a5736d5d-kubelet-dir\") pod \"csi-node-driver-6b85f\" (UID: \"ce3bc266-4945-4335-b09f-5dc1a5736d5d\") " pod="calico-system/csi-node-driver-6b85f" Jan 21 00:59:31.613538 kubelet[3975]: E0121 00:59:31.613498 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.613538 kubelet[3975]: W0121 00:59:31.613504 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.613538 kubelet[3975]: E0121 00:59:31.613512 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.613630 kubelet[3975]: I0121 00:59:31.613543 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbwbm\" (UniqueName: \"kubernetes.io/projected/ce3bc266-4945-4335-b09f-5dc1a5736d5d-kube-api-access-hbwbm\") pod \"csi-node-driver-6b85f\" (UID: \"ce3bc266-4945-4335-b09f-5dc1a5736d5d\") " pod="calico-system/csi-node-driver-6b85f" Jan 21 00:59:31.613903 kubelet[3975]: E0121 00:59:31.613891 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.614067 kubelet[3975]: W0121 00:59:31.613945 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.614067 kubelet[3975]: E0121 00:59:31.613957 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.614067 kubelet[3975]: I0121 00:59:31.613979 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ce3bc266-4945-4335-b09f-5dc1a5736d5d-socket-dir\") pod \"csi-node-driver-6b85f\" (UID: \"ce3bc266-4945-4335-b09f-5dc1a5736d5d\") " pod="calico-system/csi-node-driver-6b85f" Jan 21 00:59:31.614185 kubelet[3975]: E0121 00:59:31.614179 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.614219 kubelet[3975]: W0121 00:59:31.614213 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.614267 kubelet[3975]: E0121 00:59:31.614260 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.614313 kubelet[3975]: I0121 00:59:31.614305 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ce3bc266-4945-4335-b09f-5dc1a5736d5d-varrun\") pod \"csi-node-driver-6b85f\" (UID: \"ce3bc266-4945-4335-b09f-5dc1a5736d5d\") " pod="calico-system/csi-node-driver-6b85f" Jan 21 00:59:31.614442 kubelet[3975]: E0121 00:59:31.614428 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.614442 kubelet[3975]: W0121 00:59:31.614439 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.614493 kubelet[3975]: E0121 00:59:31.614448 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.614791 kubelet[3975]: E0121 00:59:31.614553 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.614791 kubelet[3975]: W0121 00:59:31.614559 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.614791 kubelet[3975]: E0121 00:59:31.614565 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.614791 kubelet[3975]: E0121 00:59:31.614696 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.614791 kubelet[3975]: W0121 00:59:31.614700 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.614791 kubelet[3975]: E0121 00:59:31.614707 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.614958 kubelet[3975]: E0121 00:59:31.614855 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.614958 kubelet[3975]: W0121 00:59:31.614862 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.614958 kubelet[3975]: E0121 00:59:31.614872 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.615031 kubelet[3975]: E0121 00:59:31.615024 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.615031 kubelet[3975]: W0121 00:59:31.615029 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.615074 kubelet[3975]: E0121 00:59:31.615035 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.615193 kubelet[3975]: E0121 00:59:31.615169 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.615193 kubelet[3975]: W0121 00:59:31.615175 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.615193 kubelet[3975]: E0121 00:59:31.615181 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.615360 kubelet[3975]: E0121 00:59:31.615293 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.615360 kubelet[3975]: W0121 00:59:31.615304 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.615360 kubelet[3975]: E0121 00:59:31.615309 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.615443 kubelet[3975]: E0121 00:59:31.615416 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.615443 kubelet[3975]: W0121 00:59:31.615420 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.615443 kubelet[3975]: E0121 00:59:31.615426 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.615607 kubelet[3975]: E0121 00:59:31.615537 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.615607 kubelet[3975]: W0121 00:59:31.615542 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.615607 kubelet[3975]: E0121 00:59:31.615548 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.615689 kubelet[3975]: E0121 00:59:31.615683 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.615712 kubelet[3975]: W0121 00:59:31.615689 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.615712 kubelet[3975]: E0121 00:59:31.615706 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.629936 systemd[1]: Started cri-containerd-19b6053d24fdbb75d7b9cd88c6fc63de78040528273f1ad431214ca6fd84eb35.scope - libcontainer container 19b6053d24fdbb75d7b9cd88c6fc63de78040528273f1ad431214ca6fd84eb35. 
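The repeated driver-call failures above come from kubelet probing the FlexVolume plugin directory nodeagent~uds: the expected /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds executable is not on disk yet, so the "init" call returns no output and the JSON unmarshal fails. The flexvol-driver-host host path mounted for calico-node-vd4m6 suggests that pod installs the driver, after which these probes normally quiet down. As a rough sketch only (not the real uds driver), this is the shape of the init reply kubelet tries to parse:

# Sketch of a FlexVolume "init" handshake (assumption: JSON reply on stdout);
# the empty reply seen in the log is what triggers "unexpected end of JSON input".
import json
import sys

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())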
Jan 21 00:59:31.642000 audit: BPF prog-id=175 op=LOAD Jan 21 00:59:31.643000 audit: BPF prog-id=176 op=LOAD Jan 21 00:59:31.643000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4414 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139623630353364323466646262373564376239636438386336666336 Jan 21 00:59:31.643000 audit: BPF prog-id=176 op=UNLOAD Jan 21 00:59:31.643000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139623630353364323466646262373564376239636438386336666336 Jan 21 00:59:31.643000 audit: BPF prog-id=177 op=LOAD Jan 21 00:59:31.643000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4414 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139623630353364323466646262373564376239636438386336666336 Jan 21 00:59:31.643000 audit: BPF prog-id=178 op=LOAD Jan 21 00:59:31.643000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4414 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139623630353364323466646262373564376239636438386336666336 Jan 21 00:59:31.643000 audit: BPF prog-id=178 op=UNLOAD Jan 21 00:59:31.643000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139623630353364323466646262373564376239636438386336666336 Jan 21 00:59:31.643000 audit: BPF prog-id=177 op=UNLOAD Jan 21 00:59:31.643000 audit[4440]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139623630353364323466646262373564376239636438386336666336 Jan 21 00:59:31.643000 audit: BPF prog-id=179 op=LOAD Jan 21 00:59:31.643000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4414 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139623630353364323466646262373564376239636438386336666336 Jan 21 00:59:31.693704 containerd[2473]: time="2026-01-21T00:59:31.693666618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d86bb5dc-nkd9b,Uid:b5ba597f-8ab9-4bd4-8dbb-063ce91ba552,Namespace:calico-system,Attempt:0,} returns sandbox id \"19b6053d24fdbb75d7b9cd88c6fc63de78040528273f1ad431214ca6fd84eb35\"" Jan 21 00:59:31.695189 containerd[2473]: time="2026-01-21T00:59:31.695168340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 21 00:59:31.700426 containerd[2473]: time="2026-01-21T00:59:31.700400606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vd4m6,Uid:c22a97cf-69e7-484d-95ab-cd324714c260,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:31.714934 kubelet[3975]: E0121 00:59:31.714915 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.714934 kubelet[3975]: W0121 00:59:31.714930 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.715045 kubelet[3975]: E0121 00:59:31.714950 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.715893 kubelet[3975]: E0121 00:59:31.715126 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.715893 kubelet[3975]: W0121 00:59:31.715132 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.715893 kubelet[3975]: E0121 00:59:31.715137 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.715893 kubelet[3975]: E0121 00:59:31.715274 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.715893 kubelet[3975]: W0121 00:59:31.715278 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.715893 kubelet[3975]: E0121 00:59:31.715284 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.715893 kubelet[3975]: E0121 00:59:31.715419 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.715893 kubelet[3975]: W0121 00:59:31.715425 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.715893 kubelet[3975]: E0121 00:59:31.715430 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.715893 kubelet[3975]: E0121 00:59:31.715573 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.716200 kubelet[3975]: W0121 00:59:31.715581 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.716200 kubelet[3975]: E0121 00:59:31.715598 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.716200 kubelet[3975]: E0121 00:59:31.715697 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.716200 kubelet[3975]: W0121 00:59:31.715702 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.716200 kubelet[3975]: E0121 00:59:31.715708 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.716200 kubelet[3975]: E0121 00:59:31.715854 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.716200 kubelet[3975]: W0121 00:59:31.715860 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.716200 kubelet[3975]: E0121 00:59:31.715867 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.716200 kubelet[3975]: E0121 00:59:31.716102 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.716200 kubelet[3975]: W0121 00:59:31.716108 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.716516 kubelet[3975]: E0121 00:59:31.716115 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.716516 kubelet[3975]: E0121 00:59:31.716274 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.716516 kubelet[3975]: W0121 00:59:31.716280 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.716516 kubelet[3975]: E0121 00:59:31.716287 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.716516 kubelet[3975]: E0121 00:59:31.716417 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.716516 kubelet[3975]: W0121 00:59:31.716422 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.716516 kubelet[3975]: E0121 00:59:31.716429 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.716695 kubelet[3975]: E0121 00:59:31.716536 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.716695 kubelet[3975]: W0121 00:59:31.716542 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.716695 kubelet[3975]: E0121 00:59:31.716548 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.716695 kubelet[3975]: E0121 00:59:31.716683 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.716695 kubelet[3975]: W0121 00:59:31.716688 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.716695 kubelet[3975]: E0121 00:59:31.716694 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.716948 kubelet[3975]: E0121 00:59:31.716905 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.716948 kubelet[3975]: W0121 00:59:31.716911 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.716948 kubelet[3975]: E0121 00:59:31.716919 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.718502 kubelet[3975]: E0121 00:59:31.717135 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.718502 kubelet[3975]: W0121 00:59:31.717143 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.718502 kubelet[3975]: E0121 00:59:31.717150 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.718502 kubelet[3975]: E0121 00:59:31.717312 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.718502 kubelet[3975]: W0121 00:59:31.717317 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.718502 kubelet[3975]: E0121 00:59:31.717323 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.718502 kubelet[3975]: E0121 00:59:31.717614 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.718502 kubelet[3975]: W0121 00:59:31.717621 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.718502 kubelet[3975]: E0121 00:59:31.717629 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.718502 kubelet[3975]: E0121 00:59:31.717749 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.718849 kubelet[3975]: W0121 00:59:31.717754 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.718849 kubelet[3975]: E0121 00:59:31.717760 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.718849 kubelet[3975]: E0121 00:59:31.718043 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.718849 kubelet[3975]: W0121 00:59:31.718053 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.718849 kubelet[3975]: E0121 00:59:31.718062 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.718849 kubelet[3975]: E0121 00:59:31.718191 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.718849 kubelet[3975]: W0121 00:59:31.718196 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.718849 kubelet[3975]: E0121 00:59:31.718203 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.718849 kubelet[3975]: E0121 00:59:31.718344 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.718849 kubelet[3975]: W0121 00:59:31.718350 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.719098 kubelet[3975]: E0121 00:59:31.718358 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.719098 kubelet[3975]: E0121 00:59:31.718505 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.719098 kubelet[3975]: W0121 00:59:31.718511 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.719098 kubelet[3975]: E0121 00:59:31.718518 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.719098 kubelet[3975]: E0121 00:59:31.718670 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.719098 kubelet[3975]: W0121 00:59:31.718676 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.719098 kubelet[3975]: E0121 00:59:31.718683 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:31.719098 kubelet[3975]: E0121 00:59:31.718876 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.719098 kubelet[3975]: W0121 00:59:31.718882 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.719098 kubelet[3975]: E0121 00:59:31.718890 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.719272 kubelet[3975]: E0121 00:59:31.719013 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.719272 kubelet[3975]: W0121 00:59:31.719019 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.719272 kubelet[3975]: E0121 00:59:31.719026 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.719272 kubelet[3975]: E0121 00:59:31.719153 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.719272 kubelet[3975]: W0121 00:59:31.719158 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.719272 kubelet[3975]: E0121 00:59:31.719165 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.727487 kubelet[3975]: E0121 00:59:31.727449 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:31.727487 kubelet[3975]: W0121 00:59:31.727462 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:31.727487 kubelet[3975]: E0121 00:59:31.727474 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:31.739747 containerd[2473]: time="2026-01-21T00:59:31.739532441Z" level=info msg="connecting to shim 5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428" address="unix:///run/containerd/s/43f22762c12349fb4ff23dc8c582ac9f133092599894ae4ba6b6435ce42eb2a1" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:31.759962 systemd[1]: Started cri-containerd-5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428.scope - libcontainer container 5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428. 
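The repeated driver-call.go / plugins.go triplets above come from the kubelet probing its FlexVolume plugin directory: it tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the executable does not exist, the call returns empty output, and unmarshalling that empty output as JSON fails with "unexpected end of JSON input". As a rough illustration of the call contract only (this is a hypothetical stand-in, not the real nodeagent~uds driver), a FlexVolume driver is simply an executable that answers init by printing a JSON status object:

#!/usr/bin/env python3
# Hypothetical FlexVolume driver stub -- an illustration of the call contract
# the kubelet expects, NOT the actual nodeagent~uds/uds binary it is probing
# for in the log above. FlexVolume drivers are plain executables invoked as
# "<driver> <operation> [args...]" that print a JSON status document.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Printing well-formed JSON here is what avoids the
        # "unexpected end of JSON input" unmarshal error seen above.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported",
                          "message": f"operation {op!r} not implemented"}))
    return 0

if __name__ == "__main__":
    sys.exit(main())

Until an executable exists under the nodeagent~uds plugin directory, the kubelet keeps re-probing and logging the same three messages, which is why the block repeats; the pod2daemon-flexvol (flexvol-driver) init container pulled further down appears to be what installs the real driver on this node.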
Jan 21 00:59:31.766000 audit: BPF prog-id=180 op=LOAD Jan 21 00:59:31.767000 audit: BPF prog-id=181 op=LOAD Jan 21 00:59:31.767000 audit[4528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4517 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566646431613866396561333865663435363764376566623930383536 Jan 21 00:59:31.767000 audit: BPF prog-id=181 op=UNLOAD Jan 21 00:59:31.767000 audit[4528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566646431613866396561333865663435363764376566623930383536 Jan 21 00:59:31.767000 audit: BPF prog-id=182 op=LOAD Jan 21 00:59:31.767000 audit[4528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4517 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566646431613866396561333865663435363764376566623930383536 Jan 21 00:59:31.767000 audit: BPF prog-id=183 op=LOAD Jan 21 00:59:31.767000 audit[4528]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4517 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566646431613866396561333865663435363764376566623930383536 Jan 21 00:59:31.767000 audit: BPF prog-id=183 op=UNLOAD Jan 21 00:59:31.767000 audit[4528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566646431613866396561333865663435363764376566623930383536 Jan 21 00:59:31.767000 audit: BPF prog-id=182 op=UNLOAD Jan 21 00:59:31.767000 audit[4528]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566646431613866396561333865663435363764376566623930383536 Jan 21 00:59:31.767000 audit: BPF prog-id=184 op=LOAD Jan 21 00:59:31.767000 audit[4528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4517 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:31.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566646431613866396561333865663435363764376566623930383536 Jan 21 00:59:31.783988 containerd[2473]: time="2026-01-21T00:59:31.783961529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vd4m6,Uid:c22a97cf-69e7-484d-95ab-cd324714c260,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428\"" Jan 21 00:59:32.880317 kubelet[3975]: E0121 00:59:32.880230 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:32.972095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3809191449.mount: Deactivated successfully. 
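The audit: PROCTITLE records interleaved with the BPF prog-id LOAD/UNLOAD events carry the command line of the audited process (comm="runc", ppid 4517, i.e. the containerd shim's runc invocation) as a single hex string with NUL-separated arguments. A small sketch of decoding the value from the records above (audit truncates long command lines, so the last argument ends mid container id):

# Decode the hex-encoded, NUL-separated PROCTITLE field from the audit
# records above into a readable argv. The value below is copied from the
# PROCTITLE record for pid 4528.
proctitle = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
    "2F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F"
    "2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E"
    "696F2F3566646431613866396561333865663435363764376566623930383536"
)

argv = [arg.decode() for arg in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
#  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/5fdd1a8f9ea38ef4567d7efb90856']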
Jan 21 00:59:34.007511 containerd[2473]: time="2026-01-21T00:59:34.007470076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:34.010395 containerd[2473]: time="2026-01-21T00:59:34.010315548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 21 00:59:34.013799 containerd[2473]: time="2026-01-21T00:59:34.013751849Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:34.017290 containerd[2473]: time="2026-01-21T00:59:34.017141665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:34.017599 containerd[2473]: time="2026-01-21T00:59:34.017572568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.322232578s" Jan 21 00:59:34.017651 containerd[2473]: time="2026-01-21T00:59:34.017605131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 21 00:59:34.018652 containerd[2473]: time="2026-01-21T00:59:34.018625970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 21 00:59:34.034697 containerd[2473]: time="2026-01-21T00:59:34.034671443Z" level=info msg="CreateContainer within sandbox \"19b6053d24fdbb75d7b9cd88c6fc63de78040528273f1ad431214ca6fd84eb35\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 21 00:59:34.049806 containerd[2473]: time="2026-01-21T00:59:34.047455823Z" level=info msg="Container 90b92fded6ea4703e43fa9bc1a78c8adbee5991e1510267d7445378e930520e4: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:34.069255 containerd[2473]: time="2026-01-21T00:59:34.069232133Z" level=info msg="CreateContainer within sandbox \"19b6053d24fdbb75d7b9cd88c6fc63de78040528273f1ad431214ca6fd84eb35\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"90b92fded6ea4703e43fa9bc1a78c8adbee5991e1510267d7445378e930520e4\"" Jan 21 00:59:34.069743 containerd[2473]: time="2026-01-21T00:59:34.069721101Z" level=info msg="StartContainer for \"90b92fded6ea4703e43fa9bc1a78c8adbee5991e1510267d7445378e930520e4\"" Jan 21 00:59:34.070740 containerd[2473]: time="2026-01-21T00:59:34.070711684Z" level=info msg="connecting to shim 90b92fded6ea4703e43fa9bc1a78c8adbee5991e1510267d7445378e930520e4" address="unix:///run/containerd/s/3d0e9f0bf2bbf3e8bdc3d00946eb6eba10ee697e4ba502dfc9eedd5b11a81d81" protocol=ttrpc version=3 Jan 21 00:59:34.091532 systemd[1]: Started cri-containerd-90b92fded6ea4703e43fa9bc1a78c8adbee5991e1510267d7445378e930520e4.scope - libcontainer container 90b92fded6ea4703e43fa9bc1a78c8adbee5991e1510267d7445378e930520e4. 
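The typha pull above logs both the bytes fetched ("bytes read=33735893" in the "stop pulling" message) and the wall-clock pull time ("in 2.322232578s" in the "Pulled image" message), so an effective transfer rate can be read straight off those two lines; a quick sketch with the logged values, assuming "bytes read" counts what was actually fetched for this pull:

# Back-of-the-envelope pull rate for ghcr.io/flatcar/calico/typha:v3.30.4,
# using the two figures logged above.
bytes_read = 33_735_893     # from "stop pulling image ... bytes read=33735893"
seconds = 2.322232578       # from "Pulled image ... in 2.322232578s"

print(f"~{bytes_read / seconds / (1024 * 1024):.1f} MiB/s")   # ~13.9 MiB/s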
Jan 21 00:59:34.108000 audit: BPF prog-id=185 op=LOAD Jan 21 00:59:34.108000 audit: BPF prog-id=186 op=LOAD Jan 21 00:59:34.108000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4414 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:34.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623932666465643665613437303365343366613962633161373863 Jan 21 00:59:34.108000 audit: BPF prog-id=186 op=UNLOAD Jan 21 00:59:34.108000 audit[4566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:34.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623932666465643665613437303365343366613962633161373863 Jan 21 00:59:34.108000 audit: BPF prog-id=187 op=LOAD Jan 21 00:59:34.108000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4414 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:34.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623932666465643665613437303365343366613962633161373863 Jan 21 00:59:34.108000 audit: BPF prog-id=188 op=LOAD Jan 21 00:59:34.108000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4414 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:34.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623932666465643665613437303365343366613962633161373863 Jan 21 00:59:34.108000 audit: BPF prog-id=188 op=UNLOAD Jan 21 00:59:34.108000 audit[4566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:34.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623932666465643665613437303365343366613962633161373863 Jan 21 00:59:34.108000 audit: BPF prog-id=187 op=UNLOAD Jan 21 00:59:34.108000 audit[4566]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:34.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623932666465643665613437303365343366613962633161373863 Jan 21 00:59:34.108000 audit: BPF prog-id=189 op=LOAD Jan 21 00:59:34.108000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4414 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:34.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623932666465643665613437303365343366613962633161373863 Jan 21 00:59:34.143897 containerd[2473]: time="2026-01-21T00:59:34.143870144Z" level=info msg="StartContainer for \"90b92fded6ea4703e43fa9bc1a78c8adbee5991e1510267d7445378e930520e4\" returns successfully" Jan 21 00:59:34.880051 kubelet[3975]: E0121 00:59:34.880012 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:35.026632 kubelet[3975]: E0121 00:59:35.026599 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.026632 kubelet[3975]: W0121 00:59:35.026618 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.026632 kubelet[3975]: E0121 00:59:35.026637 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.026849 kubelet[3975]: E0121 00:59:35.026754 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.026849 kubelet[3975]: W0121 00:59:35.026760 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.026849 kubelet[3975]: E0121 00:59:35.026809 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:35.026954 kubelet[3975]: E0121 00:59:35.026929 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.026954 kubelet[3975]: W0121 00:59:35.026935 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.026954 kubelet[3975]: E0121 00:59:35.026943 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.027160 kubelet[3975]: E0121 00:59:35.027136 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.027160 kubelet[3975]: W0121 00:59:35.027157 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.027209 kubelet[3975]: E0121 00:59:35.027166 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.027325 kubelet[3975]: E0121 00:59:35.027302 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.027325 kubelet[3975]: W0121 00:59:35.027323 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.027382 kubelet[3975]: E0121 00:59:35.027344 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.027461 kubelet[3975]: E0121 00:59:35.027447 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.027461 kubelet[3975]: W0121 00:59:35.027458 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.027542 kubelet[3975]: E0121 00:59:35.027464 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.027586 kubelet[3975]: E0121 00:59:35.027569 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.027586 kubelet[3975]: W0121 00:59:35.027577 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.027586 kubelet[3975]: E0121 00:59:35.027583 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:35.027729 kubelet[3975]: E0121 00:59:35.027714 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.027729 kubelet[3975]: W0121 00:59:35.027725 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.027802 kubelet[3975]: E0121 00:59:35.027735 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.027893 kubelet[3975]: E0121 00:59:35.027882 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.027893 kubelet[3975]: W0121 00:59:35.027891 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.027950 kubelet[3975]: E0121 00:59:35.027904 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.028016 kubelet[3975]: E0121 00:59:35.028002 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.028016 kubelet[3975]: W0121 00:59:35.028012 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.028063 kubelet[3975]: E0121 00:59:35.028019 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.028138 kubelet[3975]: E0121 00:59:35.028123 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.028138 kubelet[3975]: W0121 00:59:35.028130 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.028196 kubelet[3975]: E0121 00:59:35.028138 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.028245 kubelet[3975]: E0121 00:59:35.028232 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.028245 kubelet[3975]: W0121 00:59:35.028238 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.028245 kubelet[3975]: E0121 00:59:35.028244 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:35.028352 kubelet[3975]: E0121 00:59:35.028341 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.028352 kubelet[3975]: W0121 00:59:35.028348 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.028424 kubelet[3975]: E0121 00:59:35.028353 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.028467 kubelet[3975]: E0121 00:59:35.028441 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.028467 kubelet[3975]: W0121 00:59:35.028445 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.028467 kubelet[3975]: E0121 00:59:35.028452 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.028568 kubelet[3975]: E0121 00:59:35.028558 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.028568 kubelet[3975]: W0121 00:59:35.028566 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.028642 kubelet[3975]: E0121 00:59:35.028573 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.039956 kubelet[3975]: E0121 00:59:35.039904 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.039956 kubelet[3975]: W0121 00:59:35.039920 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.039956 kubelet[3975]: E0121 00:59:35.039935 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.040406 kubelet[3975]: E0121 00:59:35.040361 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.040406 kubelet[3975]: W0121 00:59:35.040384 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.040406 kubelet[3975]: E0121 00:59:35.040396 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:35.040844 kubelet[3975]: E0121 00:59:35.040810 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.040844 kubelet[3975]: W0121 00:59:35.040822 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.040844 kubelet[3975]: E0121 00:59:35.040833 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.041217 kubelet[3975]: E0121 00:59:35.041187 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.041217 kubelet[3975]: W0121 00:59:35.041197 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.041217 kubelet[3975]: E0121 00:59:35.041207 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.041487 kubelet[3975]: E0121 00:59:35.041481 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.041556 kubelet[3975]: W0121 00:59:35.041524 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.041556 kubelet[3975]: E0121 00:59:35.041535 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.041817 kubelet[3975]: E0121 00:59:35.041786 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.041817 kubelet[3975]: W0121 00:59:35.041799 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.041817 kubelet[3975]: E0121 00:59:35.041808 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.042074 kubelet[3975]: E0121 00:59:35.042068 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.042173 kubelet[3975]: W0121 00:59:35.042120 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.042173 kubelet[3975]: E0121 00:59:35.042136 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:35.042393 kubelet[3975]: E0121 00:59:35.042355 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.042393 kubelet[3975]: W0121 00:59:35.042370 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.042393 kubelet[3975]: E0121 00:59:35.042379 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.042685 kubelet[3975]: E0121 00:59:35.042660 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.042685 kubelet[3975]: W0121 00:59:35.042669 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.042685 kubelet[3975]: E0121 00:59:35.042677 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.043089 kubelet[3975]: E0121 00:59:35.043070 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.043216 kubelet[3975]: W0121 00:59:35.043172 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.043216 kubelet[3975]: E0121 00:59:35.043186 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.043648 kubelet[3975]: E0121 00:59:35.043605 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.043648 kubelet[3975]: W0121 00:59:35.043619 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.043648 kubelet[3975]: E0121 00:59:35.043631 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.044228 kubelet[3975]: E0121 00:59:35.043820 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.044228 kubelet[3975]: W0121 00:59:35.043826 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.044228 kubelet[3975]: E0121 00:59:35.043834 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:35.044228 kubelet[3975]: E0121 00:59:35.044096 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.044228 kubelet[3975]: W0121 00:59:35.044103 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.044228 kubelet[3975]: E0121 00:59:35.044112 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.044454 kubelet[3975]: E0121 00:59:35.044440 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.044480 kubelet[3975]: W0121 00:59:35.044453 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.044480 kubelet[3975]: E0121 00:59:35.044465 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.044599 kubelet[3975]: E0121 00:59:35.044590 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.044628 kubelet[3975]: W0121 00:59:35.044599 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.044628 kubelet[3975]: E0121 00:59:35.044607 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.044829 kubelet[3975]: E0121 00:59:35.044747 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.044829 kubelet[3975]: W0121 00:59:35.044755 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.044829 kubelet[3975]: E0121 00:59:35.044763 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.045434 kubelet[3975]: E0121 00:59:35.045415 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.045434 kubelet[3975]: W0121 00:59:35.045432 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.045517 kubelet[3975]: E0121 00:59:35.045445 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:59:35.045639 kubelet[3975]: E0121 00:59:35.045586 3975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:59:35.045639 kubelet[3975]: W0121 00:59:35.045591 3975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:59:35.045639 kubelet[3975]: E0121 00:59:35.045599 3975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:59:35.323750 containerd[2473]: time="2026-01-21T00:59:35.323104259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:35.325694 containerd[2473]: time="2026-01-21T00:59:35.325570357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:35.328696 containerd[2473]: time="2026-01-21T00:59:35.328671236Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:35.332372 containerd[2473]: time="2026-01-21T00:59:35.331784742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:35.332372 containerd[2473]: time="2026-01-21T00:59:35.332275146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.313535696s" Jan 21 00:59:35.332372 containerd[2473]: time="2026-01-21T00:59:35.332302753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 21 00:59:35.340157 containerd[2473]: time="2026-01-21T00:59:35.340129237Z" level=info msg="CreateContainer within sandbox \"5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 21 00:59:35.357025 containerd[2473]: time="2026-01-21T00:59:35.356979381Z" level=info msg="Container 2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:35.363182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879381697.mount: Deactivated successfully. 
Jan 21 00:59:35.372153 containerd[2473]: time="2026-01-21T00:59:35.372126466Z" level=info msg="CreateContainer within sandbox \"5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24\"" Jan 21 00:59:35.372535 containerd[2473]: time="2026-01-21T00:59:35.372519572Z" level=info msg="StartContainer for \"2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24\"" Jan 21 00:59:35.374123 containerd[2473]: time="2026-01-21T00:59:35.374098480Z" level=info msg="connecting to shim 2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24" address="unix:///run/containerd/s/43f22762c12349fb4ff23dc8c582ac9f133092599894ae4ba6b6435ce42eb2a1" protocol=ttrpc version=3 Jan 21 00:59:35.399936 systemd[1]: Started cri-containerd-2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24.scope - libcontainer container 2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24. Jan 21 00:59:35.435000 audit: BPF prog-id=190 op=LOAD Jan 21 00:59:35.435000 audit[4642]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4517 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:35.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616438643332636566353539643664323231613463376130626139 Jan 21 00:59:35.435000 audit: BPF prog-id=191 op=LOAD Jan 21 00:59:35.435000 audit[4642]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4517 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:35.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616438643332636566353539643664323231613463376130626139 Jan 21 00:59:35.435000 audit: BPF prog-id=191 op=UNLOAD Jan 21 00:59:35.435000 audit[4642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:35.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616438643332636566353539643664323231613463376130626139 Jan 21 00:59:35.435000 audit: BPF prog-id=190 op=UNLOAD Jan 21 00:59:35.435000 audit[4642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:35.435000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616438643332636566353539643664323231613463376130626139 Jan 21 00:59:35.435000 audit: BPF prog-id=192 op=LOAD Jan 21 00:59:35.435000 audit[4642]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4517 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:35.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616438643332636566353539643664323231613463376130626139 Jan 21 00:59:35.458140 containerd[2473]: time="2026-01-21T00:59:35.457851906Z" level=info msg="StartContainer for \"2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24\" returns successfully" Jan 21 00:59:35.461924 systemd[1]: cri-containerd-2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24.scope: Deactivated successfully. Jan 21 00:59:35.464729 containerd[2473]: time="2026-01-21T00:59:35.464703300Z" level=info msg="received container exit event container_id:\"2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24\" id:\"2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24\" pid:4654 exited_at:{seconds:1768957175 nanos:464303272}" Jan 21 00:59:35.464000 audit: BPF prog-id=192 op=UNLOAD Jan 21 00:59:35.484974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cad8d32cef559d6d221a4c7a0ba99b26f2079643e27b6601d8818c55590aa24-rootfs.mount: Deactivated successfully. 
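The container exit event just above reports exited_at as raw seconds and nanoseconds; converting them shows the value lines up with the journal timestamp on the same line (00:59:35.4647). A one-liner using the values from that event:

from datetime import datetime, timezone

# exited_at from the exit event for container 2cad8d32... above.
seconds, nanos = 1768957175, 464303272
exited_at = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
print(exited_at.isoformat())   # 2026-01-21T00:59:35.464303+00:00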
Jan 21 00:59:35.959810 kubelet[3975]: I0121 00:59:35.959116 3975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:59:35.977639 kubelet[3975]: I0121 00:59:35.977514 3975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55d86bb5dc-nkd9b" podStartSLOduration=2.6540656670000002 podStartE2EDuration="4.977499256s" podCreationTimestamp="2026-01-21 00:59:31 +0000 UTC" firstStartedPulling="2026-01-21 00:59:31.694970959 +0000 UTC m=+19.907699510" lastFinishedPulling="2026-01-21 00:59:34.018404548 +0000 UTC m=+22.231133099" observedRunningTime="2026-01-21 00:59:34.967195961 +0000 UTC m=+23.179924509" watchObservedRunningTime="2026-01-21 00:59:35.977499256 +0000 UTC m=+24.190227806" Jan 21 00:59:36.879717 kubelet[3975]: E0121 00:59:36.879676 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:37.964985 containerd[2473]: time="2026-01-21T00:59:37.964947403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 21 00:59:38.880149 kubelet[3975]: E0121 00:59:38.880111 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:40.880848 kubelet[3975]: E0121 00:59:40.880811 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:41.504268 containerd[2473]: time="2026-01-21T00:59:41.504226698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:41.506795 containerd[2473]: time="2026-01-21T00:59:41.506754235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70443032" Jan 21 00:59:41.510513 containerd[2473]: time="2026-01-21T00:59:41.510463840Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:41.514239 containerd[2473]: time="2026-01-21T00:59:41.514192546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:41.514757 containerd[2473]: time="2026-01-21T00:59:41.514608242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.549619933s" Jan 21 00:59:41.514757 containerd[2473]: time="2026-01-21T00:59:41.514634624Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 21 00:59:41.520755 containerd[2473]: time="2026-01-21T00:59:41.520722338Z" level=info msg="CreateContainer within sandbox \"5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 21 00:59:41.537942 containerd[2473]: time="2026-01-21T00:59:41.537383987Z" level=info msg="Container 0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:41.551731 containerd[2473]: time="2026-01-21T00:59:41.551706290Z" level=info msg="CreateContainer within sandbox \"5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7\"" Jan 21 00:59:41.552266 containerd[2473]: time="2026-01-21T00:59:41.552243644Z" level=info msg="StartContainer for \"0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7\"" Jan 21 00:59:41.553577 containerd[2473]: time="2026-01-21T00:59:41.553541610Z" level=info msg="connecting to shim 0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7" address="unix:///run/containerd/s/43f22762c12349fb4ff23dc8c582ac9f133092599894ae4ba6b6435ce42eb2a1" protocol=ttrpc version=3 Jan 21 00:59:41.575984 systemd[1]: Started cri-containerd-0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7.scope - libcontainer container 0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7. Jan 21 00:59:41.626000 audit: BPF prog-id=193 op=LOAD Jan 21 00:59:41.629254 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 21 00:59:41.629307 kernel: audit: type=1334 audit(1768957181.626:591): prog-id=193 op=LOAD Jan 21 00:59:41.636814 kernel: audit: type=1300 audit(1768957181.626:591): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4517 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:41.626000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4517 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:41.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663639333638616337343361643733643234306262373437623166 Jan 21 00:59:41.640738 kernel: audit: type=1327 audit(1768957181.626:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663639333638616337343361643733643234306262373437623166 Jan 21 00:59:41.643683 kernel: audit: type=1334 audit(1768957181.628:592): prog-id=194 op=LOAD Jan 21 00:59:41.628000 audit: BPF prog-id=194 op=LOAD Jan 21 00:59:41.628000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4517 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:41.649902 kernel: audit: type=1300 audit(1768957181.628:592): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4517 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:41.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663639333638616337343361643733643234306262373437623166 Jan 21 00:59:41.658818 kernel: audit: type=1327 audit(1768957181.628:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663639333638616337343361643733643234306262373437623166 Jan 21 00:59:41.630000 audit: BPF prog-id=194 op=UNLOAD Jan 21 00:59:41.663790 kernel: audit: type=1334 audit(1768957181.630:593): prog-id=194 op=UNLOAD Jan 21 00:59:41.630000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:41.674795 kernel: audit: type=1300 audit(1768957181.630:593): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:41.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663639333638616337343361643733643234306262373437623166 Jan 21 00:59:41.681850 containerd[2473]: time="2026-01-21T00:59:41.677970624Z" level=info msg="StartContainer for \"0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7\" returns successfully" Jan 21 00:59:41.684304 kernel: audit: type=1327 audit(1768957181.630:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663639333638616337343361643733643234306262373437623166 Jan 21 00:59:41.684374 kernel: audit: type=1334 audit(1768957181.630:594): prog-id=193 op=UNLOAD Jan 21 00:59:41.630000 audit: BPF prog-id=193 op=UNLOAD Jan 21 00:59:41.630000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:41.630000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663639333638616337343361643733643234306262373437623166 Jan 21 00:59:41.630000 audit: BPF prog-id=195 op=LOAD Jan 21 00:59:41.630000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4517 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:41.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663639333638616337343361643733643234306262373437623166 Jan 21 00:59:42.879699 kubelet[3975]: E0121 00:59:42.879658 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:42.957734 systemd[1]: cri-containerd-0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7.scope: Deactivated successfully. Jan 21 00:59:42.958133 systemd[1]: cri-containerd-0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7.scope: Consumed 411ms CPU time, 190.8M memory peak, 171.3M written to disk. Jan 21 00:59:42.959000 audit: BPF prog-id=195 op=UNLOAD Jan 21 00:59:42.963558 containerd[2473]: time="2026-01-21T00:59:42.963520791Z" level=info msg="received container exit event container_id:\"0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7\" id:\"0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7\" pid:4713 exited_at:{seconds:1768957182 nanos:962620179}" Jan 21 00:59:42.982967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bf69368ac743ad73d240bb747b1f9f50dbda10f4b8b01b220f8e6c62e7d63b7-rootfs.mount: Deactivated successfully. Jan 21 00:59:43.031603 kubelet[3975]: I0121 00:59:43.031403 3975 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 21 00:59:43.301052 systemd[1]: Created slice kubepods-besteffort-podf5ae02c8_aa71_4ba8_969f_2dd0209a0e9e.slice - libcontainer container kubepods-besteffort-podf5ae02c8_aa71_4ba8_969f_2dd0209a0e9e.slice. 
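The audit PROCTITLE records above carry the runc command line hex-encoded, with NUL bytes separating the arguments. A minimal decoding sketch (the constant below is only a short, truncated prefix of the logged value, used for illustration; applied to the full field it yields the runc --root /run/containerd/runc/k8s.io invocation for the install-cni task, cut off at the audit field limit):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Truncated prefix of the proctitle value from the audit records above.
	const proctitle = "72756E63002D2D726F6F74"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Audit stores argv with NUL separators; swap them for spaces to read the command line.
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " ")) // prints: runc --root
}
```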
Jan 21 00:59:43.398729 kubelet[3975]: I0121 00:59:43.398691 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e-tigera-ca-bundle\") pod \"calico-kube-controllers-588547dc94-gdj8l\" (UID: \"f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e\") " pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" Jan 21 00:59:43.398729 kubelet[3975]: I0121 00:59:43.398727 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wk6z\" (UniqueName: \"kubernetes.io/projected/f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e-kube-api-access-9wk6z\") pod \"calico-kube-controllers-588547dc94-gdj8l\" (UID: \"f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e\") " pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" Jan 21 00:59:43.447426 systemd[1]: Created slice kubepods-burstable-podda2cfa64_3443_4024_927a_74dc911d349e.slice - libcontainer container kubepods-burstable-podda2cfa64_3443_4024_927a_74dc911d349e.slice. Jan 21 00:59:43.544910 kubelet[3975]: I0121 00:59:43.499365 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da2cfa64-3443-4024-927a-74dc911d349e-config-volume\") pod \"coredns-674b8bbfcf-gb6hs\" (UID: \"da2cfa64-3443-4024-927a-74dc911d349e\") " pod="kube-system/coredns-674b8bbfcf-gb6hs" Jan 21 00:59:43.544910 kubelet[3975]: I0121 00:59:43.499421 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25gw\" (UniqueName: \"kubernetes.io/projected/da2cfa64-3443-4024-927a-74dc911d349e-kube-api-access-f25gw\") pod \"coredns-674b8bbfcf-gb6hs\" (UID: \"da2cfa64-3443-4024-927a-74dc911d349e\") " pod="kube-system/coredns-674b8bbfcf-gb6hs" Jan 21 00:59:43.599976 systemd[1]: Created slice kubepods-besteffort-pode84cc682_e40a_4f02_8f77_493493b8107a.slice - libcontainer container kubepods-besteffort-pode84cc682_e40a_4f02_8f77_493493b8107a.slice. 
Jan 21 00:59:43.639968 containerd[2473]: time="2026-01-21T00:59:43.639916591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588547dc94-gdj8l,Uid:f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:43.701191 kubelet[3975]: I0121 00:59:43.701124 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grxkd\" (UniqueName: \"kubernetes.io/projected/e84cc682-e40a-4f02-8f77-493493b8107a-kube-api-access-grxkd\") pod \"whisker-7945d954fc-dpzg5\" (UID: \"e84cc682-e40a-4f02-8f77-493493b8107a\") " pod="calico-system/whisker-7945d954fc-dpzg5" Jan 21 00:59:43.701191 kubelet[3975]: I0121 00:59:43.701160 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e84cc682-e40a-4f02-8f77-493493b8107a-whisker-ca-bundle\") pod \"whisker-7945d954fc-dpzg5\" (UID: \"e84cc682-e40a-4f02-8f77-493493b8107a\") " pod="calico-system/whisker-7945d954fc-dpzg5" Jan 21 00:59:43.701353 kubelet[3975]: I0121 00:59:43.701199 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e84cc682-e40a-4f02-8f77-493493b8107a-whisker-backend-key-pair\") pod \"whisker-7945d954fc-dpzg5\" (UID: \"e84cc682-e40a-4f02-8f77-493493b8107a\") " pod="calico-system/whisker-7945d954fc-dpzg5" Jan 21 00:59:43.847217 containerd[2473]: time="2026-01-21T00:59:43.847178270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gb6hs,Uid:da2cfa64-3443-4024-927a-74dc911d349e,Namespace:kube-system,Attempt:0,}" Jan 21 00:59:43.910347 systemd[1]: Created slice kubepods-besteffort-pod346360e9_6dd0_47dd_8091_663997b6e137.slice - libcontainer container kubepods-besteffort-pod346360e9_6dd0_47dd_8091_663997b6e137.slice. Jan 21 00:59:43.911232 containerd[2473]: time="2026-01-21T00:59:43.910997701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7945d954fc-dpzg5,Uid:e84cc682-e40a-4f02-8f77-493493b8107a,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:43.921497 systemd[1]: Created slice kubepods-burstable-pod7ee9a5d5_5f43_44ae_96f1_f6576c01185e.slice - libcontainer container kubepods-burstable-pod7ee9a5d5_5f43_44ae_96f1_f6576c01185e.slice. Jan 21 00:59:43.961377 systemd[1]: Created slice kubepods-besteffort-podfa6a1068_061f_4c26_9e2c_97c6b3c762d5.slice - libcontainer container kubepods-besteffort-podfa6a1068_061f_4c26_9e2c_97c6b3c762d5.slice. Jan 21 00:59:43.973824 systemd[1]: Created slice kubepods-besteffort-pod52de65b6_e239_41f1_ad3a_143641236290.slice - libcontainer container kubepods-besteffort-pod52de65b6_e239_41f1_ad3a_143641236290.slice. 
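The kubepods slice names created above are derived from the pod's QoS class and UID; the systemd cgroup driver replaces the dashes in the UID with underscores, since systemd reserves '-' for unit hierarchy. A rough sketch of that mapping, inferred from the slice names in these log lines rather than taken from kubelet's source:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming visible in the log above: a QoS-class prefix
// plus the pod UID with '-' replaced by '_'.
func podSliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e"))
	// kubepods-besteffort-podf5ae02c8_aa71_4ba8_969f_2dd0209a0e9e.slice
}
```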
Jan 21 00:59:43.997869 containerd[2473]: time="2026-01-21T00:59:43.997822489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 21 00:59:44.002601 kubelet[3975]: I0121 00:59:44.002566 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/52de65b6-e239-41f1-ad3a-143641236290-calico-apiserver-certs\") pod \"calico-apiserver-58bb965959-6mvxn\" (UID: \"52de65b6-e239-41f1-ad3a-143641236290\") " pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" Jan 21 00:59:44.002898 kubelet[3975]: I0121 00:59:44.002611 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/346360e9-6dd0-47dd-8091-663997b6e137-goldmane-ca-bundle\") pod \"goldmane-666569f655-d2xs2\" (UID: \"346360e9-6dd0-47dd-8091-663997b6e137\") " pod="calico-system/goldmane-666569f655-d2xs2" Jan 21 00:59:44.002898 kubelet[3975]: I0121 00:59:44.002632 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ee9a5d5-5f43-44ae-96f1-f6576c01185e-config-volume\") pod \"coredns-674b8bbfcf-rptg9\" (UID: \"7ee9a5d5-5f43-44ae-96f1-f6576c01185e\") " pod="kube-system/coredns-674b8bbfcf-rptg9" Jan 21 00:59:44.002898 kubelet[3975]: I0121 00:59:44.002651 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4crng\" (UniqueName: \"kubernetes.io/projected/7ee9a5d5-5f43-44ae-96f1-f6576c01185e-kube-api-access-4crng\") pod \"coredns-674b8bbfcf-rptg9\" (UID: \"7ee9a5d5-5f43-44ae-96f1-f6576c01185e\") " pod="kube-system/coredns-674b8bbfcf-rptg9" Jan 21 00:59:44.002898 kubelet[3975]: I0121 00:59:44.002669 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9bxt\" (UniqueName: \"kubernetes.io/projected/52de65b6-e239-41f1-ad3a-143641236290-kube-api-access-g9bxt\") pod \"calico-apiserver-58bb965959-6mvxn\" (UID: \"52de65b6-e239-41f1-ad3a-143641236290\") " pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" Jan 21 00:59:44.002898 kubelet[3975]: I0121 00:59:44.002689 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqt6t\" (UniqueName: \"kubernetes.io/projected/346360e9-6dd0-47dd-8091-663997b6e137-kube-api-access-hqt6t\") pod \"goldmane-666569f655-d2xs2\" (UID: \"346360e9-6dd0-47dd-8091-663997b6e137\") " pod="calico-system/goldmane-666569f655-d2xs2" Jan 21 00:59:44.003034 kubelet[3975]: I0121 00:59:44.002710 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/346360e9-6dd0-47dd-8091-663997b6e137-config\") pod \"goldmane-666569f655-d2xs2\" (UID: \"346360e9-6dd0-47dd-8091-663997b6e137\") " pod="calico-system/goldmane-666569f655-d2xs2" Jan 21 00:59:44.003034 kubelet[3975]: I0121 00:59:44.002729 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/346360e9-6dd0-47dd-8091-663997b6e137-goldmane-key-pair\") pod \"goldmane-666569f655-d2xs2\" (UID: \"346360e9-6dd0-47dd-8091-663997b6e137\") " pod="calico-system/goldmane-666569f655-d2xs2" Jan 21 00:59:44.003034 kubelet[3975]: I0121 00:59:44.002748 3975 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa6a1068-061f-4c26-9e2c-97c6b3c762d5-calico-apiserver-certs\") pod \"calico-apiserver-58bb965959-wpbx9\" (UID: \"fa6a1068-061f-4c26-9e2c-97c6b3c762d5\") " pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" Jan 21 00:59:44.003034 kubelet[3975]: I0121 00:59:44.002787 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnf7\" (UniqueName: \"kubernetes.io/projected/fa6a1068-061f-4c26-9e2c-97c6b3c762d5-kube-api-access-gxnf7\") pod \"calico-apiserver-58bb965959-wpbx9\" (UID: \"fa6a1068-061f-4c26-9e2c-97c6b3c762d5\") " pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" Jan 21 00:59:44.061797 containerd[2473]: time="2026-01-21T00:59:44.061687526Z" level=error msg="Failed to destroy network for sandbox \"e4efc2c5358c45ab1e733e2cc77576919f24bc1b42584e86fab295a6b0a6d503\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.063763 systemd[1]: run-netns-cni\x2da8dc5138\x2d291b\x2d5efc\x2d12e5\x2d557f8b609711.mount: Deactivated successfully. Jan 21 00:59:44.069884 containerd[2473]: time="2026-01-21T00:59:44.069755502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gb6hs,Uid:da2cfa64-3443-4024-927a-74dc911d349e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4efc2c5358c45ab1e733e2cc77576919f24bc1b42584e86fab295a6b0a6d503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.070284 containerd[2473]: time="2026-01-21T00:59:44.070261314Z" level=error msg="Failed to destroy network for sandbox \"a1eed8523a9d04b7d38c65be1268eb48e64ca0241e21df2726c99eac4c25ec57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.070512 kubelet[3975]: E0121 00:59:44.070409 3975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4efc2c5358c45ab1e733e2cc77576919f24bc1b42584e86fab295a6b0a6d503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.070656 kubelet[3975]: E0121 00:59:44.070492 3975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4efc2c5358c45ab1e733e2cc77576919f24bc1b42584e86fab295a6b0a6d503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gb6hs" Jan 21 00:59:44.070656 kubelet[3975]: E0121 00:59:44.070607 3975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4efc2c5358c45ab1e733e2cc77576919f24bc1b42584e86fab295a6b0a6d503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gb6hs" Jan 21 00:59:44.071017 kubelet[3975]: E0121 00:59:44.070755 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gb6hs_kube-system(da2cfa64-3443-4024-927a-74dc911d349e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gb6hs_kube-system(da2cfa64-3443-4024-927a-74dc911d349e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4efc2c5358c45ab1e733e2cc77576919f24bc1b42584e86fab295a6b0a6d503\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gb6hs" podUID="da2cfa64-3443-4024-927a-74dc911d349e" Jan 21 00:59:44.073310 systemd[1]: run-netns-cni\x2d9918c3f5\x2d5578\x2d303e\x2ddb85\x2dd9894c7351f7.mount: Deactivated successfully. Jan 21 00:59:44.076782 containerd[2473]: time="2026-01-21T00:59:44.076737948Z" level=error msg="Failed to destroy network for sandbox \"23adf08c5deb27429f836cc4fc6430901299ec702aeea08fe1c8b6951f3316a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.078382 systemd[1]: run-netns-cni\x2df443d1df\x2d3add\x2d7615\x2d1b25\x2d0faef94629dd.mount: Deactivated successfully. Jan 21 00:59:44.096944 containerd[2473]: time="2026-01-21T00:59:44.096910596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588547dc94-gdj8l,Uid:f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1eed8523a9d04b7d38c65be1268eb48e64ca0241e21df2726c99eac4c25ec57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.097751 kubelet[3975]: E0121 00:59:44.097724 3975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1eed8523a9d04b7d38c65be1268eb48e64ca0241e21df2726c99eac4c25ec57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.097940 kubelet[3975]: E0121 00:59:44.097843 3975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1eed8523a9d04b7d38c65be1268eb48e64ca0241e21df2726c99eac4c25ec57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" Jan 21 00:59:44.097940 kubelet[3975]: E0121 00:59:44.097866 3975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1eed8523a9d04b7d38c65be1268eb48e64ca0241e21df2726c99eac4c25ec57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" Jan 21 00:59:44.098090 kubelet[3975]: E0121 00:59:44.098005 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-588547dc94-gdj8l_calico-system(f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-588547dc94-gdj8l_calico-system(f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1eed8523a9d04b7d38c65be1268eb48e64ca0241e21df2726c99eac4c25ec57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 00:59:44.099432 containerd[2473]: time="2026-01-21T00:59:44.099269452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7945d954fc-dpzg5,Uid:e84cc682-e40a-4f02-8f77-493493b8107a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23adf08c5deb27429f836cc4fc6430901299ec702aeea08fe1c8b6951f3316a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.099916 kubelet[3975]: E0121 00:59:44.099729 3975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23adf08c5deb27429f836cc4fc6430901299ec702aeea08fe1c8b6951f3316a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.099978 kubelet[3975]: E0121 00:59:44.099941 3975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23adf08c5deb27429f836cc4fc6430901299ec702aeea08fe1c8b6951f3316a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7945d954fc-dpzg5" Jan 21 00:59:44.099978 kubelet[3975]: E0121 00:59:44.099961 3975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23adf08c5deb27429f836cc4fc6430901299ec702aeea08fe1c8b6951f3316a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7945d954fc-dpzg5" Jan 21 00:59:44.100039 kubelet[3975]: E0121 00:59:44.100024 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7945d954fc-dpzg5_calico-system(e84cc682-e40a-4f02-8f77-493493b8107a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7945d954fc-dpzg5_calico-system(e84cc682-e40a-4f02-8f77-493493b8107a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23adf08c5deb27429f836cc4fc6430901299ec702aeea08fe1c8b6951f3316a6\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7945d954fc-dpzg5" podUID="e84cc682-e40a-4f02-8f77-493493b8107a" Jan 21 00:59:44.221485 containerd[2473]: time="2026-01-21T00:59:44.221299542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-d2xs2,Uid:346360e9-6dd0-47dd-8091-663997b6e137,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:44.228136 containerd[2473]: time="2026-01-21T00:59:44.228109877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rptg9,Uid:7ee9a5d5-5f43-44ae-96f1-f6576c01185e,Namespace:kube-system,Attempt:0,}" Jan 21 00:59:44.268073 containerd[2473]: time="2026-01-21T00:59:44.268013949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bb965959-wpbx9,Uid:fa6a1068-061f-4c26-9e2c-97c6b3c762d5,Namespace:calico-apiserver,Attempt:0,}" Jan 21 00:59:44.283512 containerd[2473]: time="2026-01-21T00:59:44.283465770Z" level=error msg="Failed to destroy network for sandbox \"3d2192b961d47abb57804b67a7ea2838e223319d3c4ca8f01d476b1eefca20f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.287287 containerd[2473]: time="2026-01-21T00:59:44.286896753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bb965959-6mvxn,Uid:52de65b6-e239-41f1-ad3a-143641236290,Namespace:calico-apiserver,Attempt:0,}" Jan 21 00:59:44.304546 containerd[2473]: time="2026-01-21T00:59:44.304465737Z" level=error msg="Failed to destroy network for sandbox \"48b305fbd2b1482c0078dd7b594e72d893c7b9aaefe936b6689916575b0468de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.325608 containerd[2473]: time="2026-01-21T00:59:44.325567405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-d2xs2,Uid:346360e9-6dd0-47dd-8091-663997b6e137,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2192b961d47abb57804b67a7ea2838e223319d3c4ca8f01d476b1eefca20f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.326207 kubelet[3975]: E0121 00:59:44.325752 3975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2192b961d47abb57804b67a7ea2838e223319d3c4ca8f01d476b1eefca20f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.326207 kubelet[3975]: E0121 00:59:44.325813 3975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2192b961d47abb57804b67a7ea2838e223319d3c4ca8f01d476b1eefca20f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-d2xs2" Jan 21 00:59:44.326207 kubelet[3975]: 
E0121 00:59:44.325838 3975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2192b961d47abb57804b67a7ea2838e223319d3c4ca8f01d476b1eefca20f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-d2xs2" Jan 21 00:59:44.326435 kubelet[3975]: E0121 00:59:44.326134 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-d2xs2_calico-system(346360e9-6dd0-47dd-8091-663997b6e137)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-d2xs2_calico-system(346360e9-6dd0-47dd-8091-663997b6e137)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d2192b961d47abb57804b67a7ea2838e223319d3c4ca8f01d476b1eefca20f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 00:59:44.331669 containerd[2473]: time="2026-01-21T00:59:44.331580140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rptg9,Uid:7ee9a5d5-5f43-44ae-96f1-f6576c01185e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b305fbd2b1482c0078dd7b594e72d893c7b9aaefe936b6689916575b0468de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.332047 kubelet[3975]: E0121 00:59:44.331896 3975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b305fbd2b1482c0078dd7b594e72d893c7b9aaefe936b6689916575b0468de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.332047 kubelet[3975]: E0121 00:59:44.331939 3975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b305fbd2b1482c0078dd7b594e72d893c7b9aaefe936b6689916575b0468de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rptg9" Jan 21 00:59:44.332047 kubelet[3975]: E0121 00:59:44.331959 3975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b305fbd2b1482c0078dd7b594e72d893c7b9aaefe936b6689916575b0468de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rptg9" Jan 21 00:59:44.332179 kubelet[3975]: E0121 00:59:44.332003 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rptg9_kube-system(7ee9a5d5-5f43-44ae-96f1-f6576c01185e)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"coredns-674b8bbfcf-rptg9_kube-system(7ee9a5d5-5f43-44ae-96f1-f6576c01185e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48b305fbd2b1482c0078dd7b594e72d893c7b9aaefe936b6689916575b0468de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rptg9" podUID="7ee9a5d5-5f43-44ae-96f1-f6576c01185e" Jan 21 00:59:44.333009 containerd[2473]: time="2026-01-21T00:59:44.332912961Z" level=error msg="Failed to destroy network for sandbox \"a459800966ceb7f093f9cbe41f024846389bf371ab484d3dfa90376f581ed7de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.340385 containerd[2473]: time="2026-01-21T00:59:44.340337344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bb965959-wpbx9,Uid:fa6a1068-061f-4c26-9e2c-97c6b3c762d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a459800966ceb7f093f9cbe41f024846389bf371ab484d3dfa90376f581ed7de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.340540 kubelet[3975]: E0121 00:59:44.340514 3975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a459800966ceb7f093f9cbe41f024846389bf371ab484d3dfa90376f581ed7de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.340587 kubelet[3975]: E0121 00:59:44.340572 3975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a459800966ceb7f093f9cbe41f024846389bf371ab484d3dfa90376f581ed7de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" Jan 21 00:59:44.340613 kubelet[3975]: E0121 00:59:44.340593 3975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a459800966ceb7f093f9cbe41f024846389bf371ab484d3dfa90376f581ed7de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" Jan 21 00:59:44.340672 kubelet[3975]: E0121 00:59:44.340650 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58bb965959-wpbx9_calico-apiserver(fa6a1068-061f-4c26-9e2c-97c6b3c762d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58bb965959-wpbx9_calico-apiserver(fa6a1068-061f-4c26-9e2c-97c6b3c762d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a459800966ceb7f093f9cbe41f024846389bf371ab484d3dfa90376f581ed7de\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 00:59:44.361009 containerd[2473]: time="2026-01-21T00:59:44.360968660Z" level=error msg="Failed to destroy network for sandbox \"b248aa8ff7cd15d3b96114e6411bf931fff44d8b286cf5c68e63c163351917d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.365409 containerd[2473]: time="2026-01-21T00:59:44.365370547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bb965959-6mvxn,Uid:52de65b6-e239-41f1-ad3a-143641236290,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b248aa8ff7cd15d3b96114e6411bf931fff44d8b286cf5c68e63c163351917d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.365576 kubelet[3975]: E0121 00:59:44.365524 3975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b248aa8ff7cd15d3b96114e6411bf931fff44d8b286cf5c68e63c163351917d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.365576 kubelet[3975]: E0121 00:59:44.365568 3975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b248aa8ff7cd15d3b96114e6411bf931fff44d8b286cf5c68e63c163351917d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" Jan 21 00:59:44.365649 kubelet[3975]: E0121 00:59:44.365589 3975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b248aa8ff7cd15d3b96114e6411bf931fff44d8b286cf5c68e63c163351917d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" Jan 21 00:59:44.365680 kubelet[3975]: E0121 00:59:44.365643 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58bb965959-6mvxn_calico-apiserver(52de65b6-e239-41f1-ad3a-143641236290)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58bb965959-6mvxn_calico-apiserver(52de65b6-e239-41f1-ad3a-143641236290)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b248aa8ff7cd15d3b96114e6411bf931fff44d8b286cf5c68e63c163351917d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 00:59:44.884469 systemd[1]: 
Created slice kubepods-besteffort-podce3bc266_4945_4335_b09f_5dc1a5736d5d.slice - libcontainer container kubepods-besteffort-podce3bc266_4945_4335_b09f_5dc1a5736d5d.slice. Jan 21 00:59:44.886680 containerd[2473]: time="2026-01-21T00:59:44.886643529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6b85f,Uid:ce3bc266-4945-4335-b09f-5dc1a5736d5d,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:44.933954 containerd[2473]: time="2026-01-21T00:59:44.933908440Z" level=error msg="Failed to destroy network for sandbox \"40ba67751e3c45e89fb27159e719ea9a04e5a46b1f0f468586771ca6ea64b5a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.939279 containerd[2473]: time="2026-01-21T00:59:44.939238036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6b85f,Uid:ce3bc266-4945-4335-b09f-5dc1a5736d5d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ba67751e3c45e89fb27159e719ea9a04e5a46b1f0f468586771ca6ea64b5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.939569 kubelet[3975]: E0121 00:59:44.939541 3975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ba67751e3c45e89fb27159e719ea9a04e5a46b1f0f468586771ca6ea64b5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:59:44.939629 kubelet[3975]: E0121 00:59:44.939589 3975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ba67751e3c45e89fb27159e719ea9a04e5a46b1f0f468586771ca6ea64b5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6b85f" Jan 21 00:59:44.939629 kubelet[3975]: E0121 00:59:44.939613 3975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ba67751e3c45e89fb27159e719ea9a04e5a46b1f0f468586771ca6ea64b5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6b85f" Jan 21 00:59:44.939706 kubelet[3975]: E0121 00:59:44.939683 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40ba67751e3c45e89fb27159e719ea9a04e5a46b1f0f468586771ca6ea64b5a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6b85f" 
podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:50.701908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3226204653.mount: Deactivated successfully. Jan 21 00:59:50.735278 containerd[2473]: time="2026-01-21T00:59:50.735236282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:50.738334 containerd[2473]: time="2026-01-21T00:59:50.738306687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 21 00:59:50.740534 containerd[2473]: time="2026-01-21T00:59:50.740483308Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:50.744107 containerd[2473]: time="2026-01-21T00:59:50.744066262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:59:50.744506 containerd[2473]: time="2026-01-21T00:59:50.744343683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.746474554s" Jan 21 00:59:50.744506 containerd[2473]: time="2026-01-21T00:59:50.744373909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 21 00:59:50.759643 containerd[2473]: time="2026-01-21T00:59:50.759616447Z" level=info msg="CreateContainer within sandbox \"5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 21 00:59:50.777806 containerd[2473]: time="2026-01-21T00:59:50.775564338Z" level=info msg="Container a558725767376c4f09b967a615f58bc40052e04164b44ff19b1c699f0f995706: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:50.792638 containerd[2473]: time="2026-01-21T00:59:50.792597905Z" level=info msg="CreateContainer within sandbox \"5fdd1a8f9ea38ef4567d7efb9085629a50bb90bccb97094c4ca148997802e428\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a558725767376c4f09b967a615f58bc40052e04164b44ff19b1c699f0f995706\"" Jan 21 00:59:50.793930 containerd[2473]: time="2026-01-21T00:59:50.793896256Z" level=info msg="StartContainer for \"a558725767376c4f09b967a615f58bc40052e04164b44ff19b1c699f0f995706\"" Jan 21 00:59:50.795173 containerd[2473]: time="2026-01-21T00:59:50.795145755Z" level=info msg="connecting to shim a558725767376c4f09b967a615f58bc40052e04164b44ff19b1c699f0f995706" address="unix:///run/containerd/s/43f22762c12349fb4ff23dc8c582ac9f133092599894ae4ba6b6435ce42eb2a1" protocol=ttrpc version=3 Jan 21 00:59:50.811952 systemd[1]: Started cri-containerd-a558725767376c4f09b967a615f58bc40052e04164b44ff19b1c699f0f995706.scope - libcontainer container a558725767376c4f09b967a615f58bc40052e04164b44ff19b1c699f0f995706. 
Jan 21 00:59:50.855000 audit: BPF prog-id=196 op=LOAD Jan 21 00:59:50.857112 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 21 00:59:50.857194 kernel: audit: type=1334 audit(1768957190.855:597): prog-id=196 op=LOAD Jan 21 00:59:50.855000 audit[4969]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4517 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:50.862171 kernel: audit: type=1300 audit(1768957190.855:597): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4517 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:50.866796 kernel: audit: type=1327 audit(1768957190.855:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353837323537363733373663346630396239363761363135663538 Jan 21 00:59:50.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353837323537363733373663346630396239363761363135663538 Jan 21 00:59:50.869010 kernel: audit: type=1334 audit(1768957190.855:598): prog-id=197 op=LOAD Jan 21 00:59:50.855000 audit: BPF prog-id=197 op=LOAD Jan 21 00:59:50.855000 audit[4969]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4517 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:50.873538 kernel: audit: type=1300 audit(1768957190.855:598): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4517 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:50.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353837323537363733373663346630396239363761363135663538 Jan 21 00:59:50.881802 kernel: audit: type=1327 audit(1768957190.855:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353837323537363733373663346630396239363761363135663538 Jan 21 00:59:50.881869 kernel: audit: type=1334 audit(1768957190.855:599): prog-id=197 op=UNLOAD Jan 21 00:59:50.855000 audit: BPF prog-id=197 op=UNLOAD Jan 21 00:59:50.855000 audit[4969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:50.896013 kernel: audit: type=1300 
audit(1768957190.855:599): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:50.896074 kernel: audit: type=1327 audit(1768957190.855:599): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353837323537363733373663346630396239363761363135663538 Jan 21 00:59:50.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353837323537363733373663346630396239363761363135663538 Jan 21 00:59:50.898205 kernel: audit: type=1334 audit(1768957190.855:600): prog-id=196 op=UNLOAD Jan 21 00:59:50.855000 audit: BPF prog-id=196 op=UNLOAD Jan 21 00:59:50.855000 audit[4969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:50.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353837323537363733373663346630396239363761363135663538 Jan 21 00:59:50.855000 audit: BPF prog-id=198 op=LOAD Jan 21 00:59:50.855000 audit[4969]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4517 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:50.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353837323537363733373663346630396239363761363135663538 Jan 21 00:59:50.907025 containerd[2473]: time="2026-01-21T00:59:50.906996433Z" level=info msg="StartContainer for \"a558725767376c4f09b967a615f58bc40052e04164b44ff19b1c699f0f995706\" returns successfully" Jan 21 00:59:51.208583 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 21 00:59:51.208662 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
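In the SYSCALL records above, arch=c000003e is AUDIT_ARCH_X86_64, so the syscall numbers follow the x86_64 table: 321 is bpf(2) (the prog LOAD events) and 3 is close(2); each UNLOAD is runc closing the descriptor returned by the matching load (exit=22 for the bpf call, a0=0x16 for the close). A small lookup sketch, assuming only those well-known constants:

```go
package main

import "fmt"

func main() {
	// x86_64 syscall numbers seen in the audit records; values come from the
	// standard syscall table, not from the log itself.
	names := map[int]string{
		3:   "close",
		321: "bpf",
	}
	for _, nr := range []int{321, 3} {
		fmt.Printf("syscall %d on x86_64 -> %s\n", nr, names[nr])
	}
}
```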
Jan 21 00:59:51.298606 kubelet[3975]: I0121 00:59:51.298497 3975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vd4m6" podStartSLOduration=1.338308636 podStartE2EDuration="20.298479455s" podCreationTimestamp="2026-01-21 00:59:31 +0000 UTC" firstStartedPulling="2026-01-21 00:59:31.784755055 +0000 UTC m=+19.997483592" lastFinishedPulling="2026-01-21 00:59:50.744925862 +0000 UTC m=+38.957654411" observedRunningTime="2026-01-21 00:59:51.035500932 +0000 UTC m=+39.248229483" watchObservedRunningTime="2026-01-21 00:59:51.298479455 +0000 UTC m=+39.511208004" Jan 21 00:59:51.342959 kubelet[3975]: I0121 00:59:51.342921 3975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grxkd\" (UniqueName: \"kubernetes.io/projected/e84cc682-e40a-4f02-8f77-493493b8107a-kube-api-access-grxkd\") pod \"e84cc682-e40a-4f02-8f77-493493b8107a\" (UID: \"e84cc682-e40a-4f02-8f77-493493b8107a\") " Jan 21 00:59:51.343066 kubelet[3975]: I0121 00:59:51.342977 3975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e84cc682-e40a-4f02-8f77-493493b8107a-whisker-ca-bundle\") pod \"e84cc682-e40a-4f02-8f77-493493b8107a\" (UID: \"e84cc682-e40a-4f02-8f77-493493b8107a\") " Jan 21 00:59:51.343066 kubelet[3975]: I0121 00:59:51.342996 3975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e84cc682-e40a-4f02-8f77-493493b8107a-whisker-backend-key-pair\") pod \"e84cc682-e40a-4f02-8f77-493493b8107a\" (UID: \"e84cc682-e40a-4f02-8f77-493493b8107a\") " Jan 21 00:59:51.344947 kubelet[3975]: I0121 00:59:51.344032 3975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e84cc682-e40a-4f02-8f77-493493b8107a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e84cc682-e40a-4f02-8f77-493493b8107a" (UID: "e84cc682-e40a-4f02-8f77-493493b8107a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:59:51.348975 kubelet[3975]: I0121 00:59:51.348946 3975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84cc682-e40a-4f02-8f77-493493b8107a-kube-api-access-grxkd" (OuterVolumeSpecName: "kube-api-access-grxkd") pod "e84cc682-e40a-4f02-8f77-493493b8107a" (UID: "e84cc682-e40a-4f02-8f77-493493b8107a"). InnerVolumeSpecName "kube-api-access-grxkd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:59:51.349285 kubelet[3975]: I0121 00:59:51.349161 3975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84cc682-e40a-4f02-8f77-493493b8107a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e84cc682-e40a-4f02-8f77-493493b8107a" (UID: "e84cc682-e40a-4f02-8f77-493493b8107a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:59:51.443870 kubelet[3975]: I0121 00:59:51.443840 3975 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e84cc682-e40a-4f02-8f77-493493b8107a-whisker-backend-key-pair\") on node \"ci-4547.0.0-n-ed178c4493\" DevicePath \"\"" Jan 21 00:59:51.444003 kubelet[3975]: I0121 00:59:51.443973 3975 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grxkd\" (UniqueName: \"kubernetes.io/projected/e84cc682-e40a-4f02-8f77-493493b8107a-kube-api-access-grxkd\") on node \"ci-4547.0.0-n-ed178c4493\" DevicePath \"\"" Jan 21 00:59:51.444003 kubelet[3975]: I0121 00:59:51.443988 3975 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e84cc682-e40a-4f02-8f77-493493b8107a-whisker-ca-bundle\") on node \"ci-4547.0.0-n-ed178c4493\" DevicePath \"\"" Jan 21 00:59:51.702335 systemd[1]: var-lib-kubelet-pods-e84cc682\x2de40a\x2d4f02\x2d8f77\x2d493493b8107a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgrxkd.mount: Deactivated successfully. Jan 21 00:59:51.702437 systemd[1]: var-lib-kubelet-pods-e84cc682\x2de40a\x2d4f02\x2d8f77\x2d493493b8107a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 21 00:59:51.885107 systemd[1]: Removed slice kubepods-besteffort-pode84cc682_e40a_4f02_8f77_493493b8107a.slice - libcontainer container kubepods-besteffort-pode84cc682_e40a_4f02_8f77_493493b8107a.slice. Jan 21 00:59:52.099318 systemd[1]: Created slice kubepods-besteffort-pod583e9439_b173_47b8_8158_974665ab3f14.slice - libcontainer container kubepods-besteffort-pod583e9439_b173_47b8_8158_974665ab3f14.slice. Jan 21 00:59:52.148346 kubelet[3975]: I0121 00:59:52.148312 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/583e9439-b173-47b8-8158-974665ab3f14-whisker-ca-bundle\") pod \"whisker-65f69864b5-f6kfk\" (UID: \"583e9439-b173-47b8-8158-974665ab3f14\") " pod="calico-system/whisker-65f69864b5-f6kfk" Jan 21 00:59:52.148468 kubelet[3975]: I0121 00:59:52.148355 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/583e9439-b173-47b8-8158-974665ab3f14-whisker-backend-key-pair\") pod \"whisker-65f69864b5-f6kfk\" (UID: \"583e9439-b173-47b8-8158-974665ab3f14\") " pod="calico-system/whisker-65f69864b5-f6kfk" Jan 21 00:59:52.148468 kubelet[3975]: I0121 00:59:52.148375 3975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj4rg\" (UniqueName: \"kubernetes.io/projected/583e9439-b173-47b8-8158-974665ab3f14-kube-api-access-hj4rg\") pod \"whisker-65f69864b5-f6kfk\" (UID: \"583e9439-b173-47b8-8158-974665ab3f14\") " pod="calico-system/whisker-65f69864b5-f6kfk" Jan 21 00:59:52.405336 containerd[2473]: time="2026-01-21T00:59:52.405243372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f69864b5-f6kfk,Uid:583e9439-b173-47b8-8158-974665ab3f14,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:52.542287 systemd-networkd[2103]: califbfa8135766: Link UP Jan 21 00:59:52.542462 systemd-networkd[2103]: califbfa8135766: Gained carrier Jan 21 00:59:52.563511 containerd[2473]: 2026-01-21 00:59:52.435 [INFO][5033] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 
00:59:52.563511 containerd[2473]: 2026-01-21 00:59:52.442 [INFO][5033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0 whisker-65f69864b5- calico-system 583e9439-b173-47b8-8158-974665ab3f14 921 0 2026-01-21 00:59:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65f69864b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-n-ed178c4493 whisker-65f69864b5-f6kfk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califbfa8135766 [] [] }} ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Namespace="calico-system" Pod="whisker-65f69864b5-f6kfk" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-" Jan 21 00:59:52.563511 containerd[2473]: 2026-01-21 00:59:52.442 [INFO][5033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Namespace="calico-system" Pod="whisker-65f69864b5-f6kfk" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" Jan 21 00:59:52.563511 containerd[2473]: 2026-01-21 00:59:52.476 [INFO][5046] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" HandleID="k8s-pod-network.e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Workload="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" Jan 21 00:59:52.563730 containerd[2473]: 2026-01-21 00:59:52.476 [INFO][5046] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" HandleID="k8s-pod-network.e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Workload="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048ee40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-ed178c4493", "pod":"whisker-65f69864b5-f6kfk", "timestamp":"2026-01-21 00:59:52.476669527 +0000 UTC"}, Hostname:"ci-4547.0.0-n-ed178c4493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:59:52.563730 containerd[2473]: 2026-01-21 00:59:52.476 [INFO][5046] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:59:52.563730 containerd[2473]: 2026-01-21 00:59:52.476 [INFO][5046] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
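The pod_startup_latency_tracker entry a few records above reports podStartE2EDuration="20.298479455s"; that is simply observedRunningTime minus podCreationTimestamp, which is easy to verify from the logged values:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-21 00:59:31 +0000 UTC")
	running := mustParse("2026-01-21 00:59:51.298479455 +0000 UTC")
	fmt.Println(running.Sub(created)) // 20.298479455s, matching podStartE2EDuration
}
```

The podStartSLOduration of 1.338308636s is the same interval minus the image-pull window (lastFinishedPulling - firstStartedPulling, about 18.960170819s by the monotonic m=+ offsets in the record).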
Jan 21 00:59:52.563730 containerd[2473]: 2026-01-21 00:59:52.476 [INFO][5046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-ed178c4493' Jan 21 00:59:52.563730 containerd[2473]: 2026-01-21 00:59:52.482 [INFO][5046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:52.563730 containerd[2473]: 2026-01-21 00:59:52.486 [INFO][5046] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:52.563730 containerd[2473]: 2026-01-21 00:59:52.490 [INFO][5046] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:52.563730 containerd[2473]: 2026-01-21 00:59:52.492 [INFO][5046] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:52.563730 containerd[2473]: 2026-01-21 00:59:52.493 [INFO][5046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:52.563962 containerd[2473]: 2026-01-21 00:59:52.493 [INFO][5046] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:52.563962 containerd[2473]: 2026-01-21 00:59:52.495 [INFO][5046] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5 Jan 21 00:59:52.563962 containerd[2473]: 2026-01-21 00:59:52.501 [INFO][5046] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:52.563962 containerd[2473]: 2026-01-21 00:59:52.508 [INFO][5046] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.129/26] block=192.168.70.128/26 handle="k8s-pod-network.e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:52.563962 containerd[2473]: 2026-01-21 00:59:52.508 [INFO][5046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.129/26] handle="k8s-pod-network.e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:52.563962 containerd[2473]: 2026-01-21 00:59:52.509 [INFO][5046] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
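
The IPAM lines above show Calico trying the affinity block 192.168.70.128/26 and claiming 192.168.70.129/26 for the whisker pod. The following is a standard-library-only Go sketch of the arithmetic those lines imply (illustrative only, not Calico's IPAM code): a /26 block holds 64 addresses, and the claimed address falls inside it.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affinity block reported by ipam/ipam.go in the log above.
	block := netip.MustParsePrefix("192.168.70.128/26")

	// A /26 leaves 32-26 = 6 host bits, i.e. 2^6 = 64 addresses per block.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)

	// The address the log shows being claimed for whisker-65f69864b5-f6kfk.
	claimed := netip.MustParseAddr("192.168.70.129")
	fmt.Printf("%s inside %s: %v\n", claimed, block, block.Contains(claimed))
}
```
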
Jan 21 00:59:52.563962 containerd[2473]: 2026-01-21 00:59:52.509 [INFO][5046] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.129/26] IPv6=[] ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" HandleID="k8s-pod-network.e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Workload="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" Jan 21 00:59:52.564108 containerd[2473]: 2026-01-21 00:59:52.512 [INFO][5033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Namespace="calico-system" Pod="whisker-65f69864b5-f6kfk" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0", GenerateName:"whisker-65f69864b5-", Namespace:"calico-system", SelfLink:"", UID:"583e9439-b173-47b8-8158-974665ab3f14", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65f69864b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"", Pod:"whisker-65f69864b5-f6kfk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califbfa8135766", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:52.564108 containerd[2473]: 2026-01-21 00:59:52.512 [INFO][5033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.129/32] ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Namespace="calico-system" Pod="whisker-65f69864b5-f6kfk" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" Jan 21 00:59:52.564206 containerd[2473]: 2026-01-21 00:59:52.512 [INFO][5033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbfa8135766 ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Namespace="calico-system" Pod="whisker-65f69864b5-f6kfk" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" Jan 21 00:59:52.564206 containerd[2473]: 2026-01-21 00:59:52.542 [INFO][5033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Namespace="calico-system" Pod="whisker-65f69864b5-f6kfk" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" Jan 21 00:59:52.564256 containerd[2473]: 2026-01-21 00:59:52.544 [INFO][5033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Namespace="calico-system" 
Pod="whisker-65f69864b5-f6kfk" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0", GenerateName:"whisker-65f69864b5-", Namespace:"calico-system", SelfLink:"", UID:"583e9439-b173-47b8-8158-974665ab3f14", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65f69864b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5", Pod:"whisker-65f69864b5-f6kfk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califbfa8135766", MAC:"1a:63:32:c0:3a:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:52.564310 containerd[2473]: 2026-01-21 00:59:52.560 [INFO][5033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" Namespace="calico-system" Pod="whisker-65f69864b5-f6kfk" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-whisker--65f69864b5--f6kfk-eth0" Jan 21 00:59:52.615890 containerd[2473]: time="2026-01-21T00:59:52.615806126Z" level=info msg="connecting to shim e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5" address="unix:///run/containerd/s/7cc016b7c659cf3fb97a51542548e2bc66d0d16872ac9a511d442ba726f96e4c" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:52.655974 systemd[1]: Started cri-containerd-e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5.scope - libcontainer container e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5. 
Jan 21 00:59:52.677000 audit: BPF prog-id=199 op=LOAD Jan 21 00:59:52.678000 audit: BPF prog-id=200 op=LOAD Jan 21 00:59:52.678000 audit[5167]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5155 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:52.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539323133353730356463306131303833393236383131666231653838 Jan 21 00:59:52.678000 audit: BPF prog-id=200 op=UNLOAD Jan 21 00:59:52.678000 audit[5167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5155 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:52.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539323133353730356463306131303833393236383131666231653838 Jan 21 00:59:52.678000 audit: BPF prog-id=201 op=LOAD Jan 21 00:59:52.678000 audit[5167]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5155 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:52.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539323133353730356463306131303833393236383131666231653838 Jan 21 00:59:52.678000 audit: BPF prog-id=202 op=LOAD Jan 21 00:59:52.678000 audit[5167]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5155 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:52.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539323133353730356463306131303833393236383131666231653838 Jan 21 00:59:52.678000 audit: BPF prog-id=202 op=UNLOAD Jan 21 00:59:52.678000 audit[5167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5155 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:52.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539323133353730356463306131303833393236383131666231653838 Jan 21 00:59:52.678000 audit: BPF prog-id=201 op=UNLOAD Jan 21 00:59:52.678000 audit[5167]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5155 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:52.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539323133353730356463306131303833393236383131666231653838 Jan 21 00:59:52.678000 audit: BPF prog-id=203 op=LOAD Jan 21 00:59:52.678000 audit[5167]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5155 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:52.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539323133353730356463306131303833393236383131666231653838 Jan 21 00:59:52.742827 containerd[2473]: time="2026-01-21T00:59:52.742291818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f69864b5-f6kfk,Uid:583e9439-b173-47b8-8158-974665ab3f14,Namespace:calico-system,Attempt:0,} returns sandbox id \"e92135705dc0a1083926811fb1e8829a94a792d4c35ff69aa62604dc43c87fa5\"" Jan 21 00:59:52.744676 containerd[2473]: time="2026-01-21T00:59:52.744650917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 00:59:53.000798 containerd[2473]: time="2026-01-21T00:59:53.000545133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:53.003379 containerd[2473]: time="2026-01-21T00:59:53.003272561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 00:59:53.003379 containerd[2473]: time="2026-01-21T00:59:53.003301592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:53.003553 kubelet[3975]: E0121 00:59:53.003520 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 00:59:53.003884 kubelet[3975]: E0121 00:59:53.003569 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 00:59:53.003914 kubelet[3975]: E0121 00:59:53.003711 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:057c1f4bca9b49eb960e9b36fddec5b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hj4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f69864b5-f6kfk_calico-system(583e9439-b173-47b8-8158-974665ab3f14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:53.005838 containerd[2473]: time="2026-01-21T00:59:53.005811740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 00:59:53.263153 containerd[2473]: time="2026-01-21T00:59:53.263043409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:53.265502 containerd[2473]: time="2026-01-21T00:59:53.265445288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 00:59:53.265561 containerd[2473]: time="2026-01-21T00:59:53.265532629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:53.265690 kubelet[3975]: E0121 00:59:53.265652 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 00:59:53.265745 kubelet[3975]: E0121 00:59:53.265703 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 00:59:53.265919 kubelet[3975]: E0121 00:59:53.265869 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hj4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f69864b5-f6kfk_calico-system(583e9439-b173-47b8-8158-974665ab3f14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:53.267799 kubelet[3975]: E0121 00:59:53.267737 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 00:59:53.886802 kubelet[3975]: I0121 00:59:53.885549 3975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84cc682-e40a-4f02-8f77-493493b8107a" path="/var/lib/kubelet/pods/e84cc682-e40a-4f02-8f77-493493b8107a/volumes" Jan 21 00:59:54.028306 kubelet[3975]: E0121 00:59:54.027904 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 00:59:54.053000 audit[5221]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=5221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:54.053000 audit[5221]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc47a33070 a2=0 a3=7ffc47a3305c items=0 ppid=4132 pid=5221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:54.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:54.058000 audit[5221]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=5221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:54.058000 audit[5221]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc47a33070 a2=0 a3=0 items=0 ppid=4132 pid=5221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:54.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:54.587912 systemd-networkd[2103]: califbfa8135766: Gained IPv6LL Jan 21 00:59:54.880990 containerd[2473]: time="2026-01-21T00:59:54.880920530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588547dc94-gdj8l,Uid:f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:55.002903 systemd-networkd[2103]: cali12e73a924ae: Link UP Jan 21 00:59:55.003619 systemd-networkd[2103]: cali12e73a924ae: Gained carrier Jan 21 00:59:55.021022 containerd[2473]: 2026-01-21 00:59:54.922 [INFO][5232] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 00:59:55.021022 containerd[2473]: 2026-01-21 00:59:54.934 [INFO][5232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0 calico-kube-controllers-588547dc94- calico-system f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e 850 0 2026-01-21 00:59:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:588547dc94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-n-ed178c4493 calico-kube-controllers-588547dc94-gdj8l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali12e73a924ae [] [] }} ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Namespace="calico-system" Pod="calico-kube-controllers-588547dc94-gdj8l" 
WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-" Jan 21 00:59:55.021022 containerd[2473]: 2026-01-21 00:59:54.934 [INFO][5232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Namespace="calico-system" Pod="calico-kube-controllers-588547dc94-gdj8l" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" Jan 21 00:59:55.021022 containerd[2473]: 2026-01-21 00:59:54.963 [INFO][5253] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" HandleID="k8s-pod-network.a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Workload="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" Jan 21 00:59:55.021203 containerd[2473]: 2026-01-21 00:59:54.963 [INFO][5253] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" HandleID="k8s-pod-network.a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Workload="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-ed178c4493", "pod":"calico-kube-controllers-588547dc94-gdj8l", "timestamp":"2026-01-21 00:59:54.963510543 +0000 UTC"}, Hostname:"ci-4547.0.0-n-ed178c4493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:59:55.021203 containerd[2473]: 2026-01-21 00:59:54.963 [INFO][5253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:59:55.021203 containerd[2473]: 2026-01-21 00:59:54.964 [INFO][5253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 00:59:55.021203 containerd[2473]: 2026-01-21 00:59:54.964 [INFO][5253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-ed178c4493' Jan 21 00:59:55.021203 containerd[2473]: 2026-01-21 00:59:54.971 [INFO][5253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:55.021203 containerd[2473]: 2026-01-21 00:59:54.975 [INFO][5253] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:55.021203 containerd[2473]: 2026-01-21 00:59:54.979 [INFO][5253] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:55.021203 containerd[2473]: 2026-01-21 00:59:54.980 [INFO][5253] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:55.021203 containerd[2473]: 2026-01-21 00:59:54.982 [INFO][5253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:55.021698 containerd[2473]: 2026-01-21 00:59:54.982 [INFO][5253] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:55.021698 containerd[2473]: 2026-01-21 00:59:54.983 [INFO][5253] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196 Jan 21 00:59:55.021698 containerd[2473]: 2026-01-21 00:59:54.988 [INFO][5253] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:55.021698 containerd[2473]: 2026-01-21 00:59:54.994 [INFO][5253] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.130/26] block=192.168.70.128/26 handle="k8s-pod-network.a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:55.021698 containerd[2473]: 2026-01-21 00:59:54.994 [INFO][5253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.130/26] handle="k8s-pod-network.a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:55.021698 containerd[2473]: 2026-01-21 00:59:54.994 [INFO][5253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 00:59:55.021698 containerd[2473]: 2026-01-21 00:59:54.994 [INFO][5253] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.130/26] IPv6=[] ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" HandleID="k8s-pod-network.a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Workload="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" Jan 21 00:59:55.021881 containerd[2473]: 2026-01-21 00:59:54.998 [INFO][5232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Namespace="calico-system" Pod="calico-kube-controllers-588547dc94-gdj8l" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0", GenerateName:"calico-kube-controllers-588547dc94-", Namespace:"calico-system", SelfLink:"", UID:"f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588547dc94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"", Pod:"calico-kube-controllers-588547dc94-gdj8l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali12e73a924ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:55.021954 containerd[2473]: 2026-01-21 00:59:54.998 [INFO][5232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.130/32] ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Namespace="calico-system" Pod="calico-kube-controllers-588547dc94-gdj8l" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" Jan 21 00:59:55.021954 containerd[2473]: 2026-01-21 00:59:54.998 [INFO][5232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12e73a924ae ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Namespace="calico-system" Pod="calico-kube-controllers-588547dc94-gdj8l" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" Jan 21 00:59:55.021954 containerd[2473]: 2026-01-21 00:59:55.006 [INFO][5232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Namespace="calico-system" Pod="calico-kube-controllers-588547dc94-gdj8l" 
WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" Jan 21 00:59:55.022020 containerd[2473]: 2026-01-21 00:59:55.006 [INFO][5232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Namespace="calico-system" Pod="calico-kube-controllers-588547dc94-gdj8l" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0", GenerateName:"calico-kube-controllers-588547dc94-", Namespace:"calico-system", SelfLink:"", UID:"f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588547dc94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196", Pod:"calico-kube-controllers-588547dc94-gdj8l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali12e73a924ae", MAC:"16:d3:a2:af:6a:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:55.022077 containerd[2473]: 2026-01-21 00:59:55.019 [INFO][5232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" Namespace="calico-system" Pod="calico-kube-controllers-588547dc94-gdj8l" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--kube--controllers--588547dc94--gdj8l-eth0" Jan 21 00:59:55.068229 containerd[2473]: time="2026-01-21T00:59:55.068186284Z" level=info msg="connecting to shim a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196" address="unix:///run/containerd/s/d6b98d7900ccfaaf121fa5e465e11be050f263a8033a5c82c78a347cd07bfc15" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:55.096967 systemd[1]: Started cri-containerd-a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196.scope - libcontainer container a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196. 
Jan 21 00:59:55.106000 audit: BPF prog-id=204 op=LOAD Jan 21 00:59:55.106000 audit: BPF prog-id=205 op=LOAD Jan 21 00:59:55.106000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5277 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656636643062366461396438383262386261303066383233346466 Jan 21 00:59:55.106000 audit: BPF prog-id=205 op=UNLOAD Jan 21 00:59:55.106000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5277 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656636643062366461396438383262386261303066383233346466 Jan 21 00:59:55.106000 audit: BPF prog-id=206 op=LOAD Jan 21 00:59:55.106000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5277 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656636643062366461396438383262386261303066383233346466 Jan 21 00:59:55.106000 audit: BPF prog-id=207 op=LOAD Jan 21 00:59:55.106000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5277 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656636643062366461396438383262386261303066383233346466 Jan 21 00:59:55.106000 audit: BPF prog-id=207 op=UNLOAD Jan 21 00:59:55.106000 audit[5288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5277 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656636643062366461396438383262386261303066383233346466 Jan 21 00:59:55.106000 audit: BPF prog-id=206 op=UNLOAD Jan 21 00:59:55.106000 audit[5288]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5277 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656636643062366461396438383262386261303066383233346466 Jan 21 00:59:55.106000 audit: BPF prog-id=208 op=LOAD Jan 21 00:59:55.106000 audit[5288]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5277 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656636643062366461396438383262386261303066383233346466 Jan 21 00:59:55.136784 containerd[2473]: time="2026-01-21T00:59:55.136606151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588547dc94-gdj8l,Uid:f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7ef6d0b6da9d882b8ba00f8234dfda33be08db1dd9c72e3af78fc8e6a61b196\"" Jan 21 00:59:55.138991 containerd[2473]: time="2026-01-21T00:59:55.138923563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 00:59:55.381021 containerd[2473]: time="2026-01-21T00:59:55.380981203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:55.391191 containerd[2473]: time="2026-01-21T00:59:55.391112639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 00:59:55.391191 containerd[2473]: time="2026-01-21T00:59:55.391186914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:55.391419 kubelet[3975]: E0121 00:59:55.391371 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 00:59:55.391754 kubelet[3975]: E0121 00:59:55.391431 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 00:59:55.392087 kubelet[3975]: E0121 00:59:55.392003 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wk6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-588547dc94-gdj8l_calico-system(f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:55.393283 kubelet[3975]: E0121 00:59:55.393242 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 00:59:55.880978 containerd[2473]: time="2026-01-21T00:59:55.880864030Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-6b85f,Uid:ce3bc266-4945-4335-b09f-5dc1a5736d5d,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:55.986105 systemd-networkd[2103]: cali98df118fc0d: Link UP Jan 21 00:59:55.987641 systemd-networkd[2103]: cali98df118fc0d: Gained carrier Jan 21 00:59:56.003611 containerd[2473]: 2026-01-21 00:59:55.910 [INFO][5315] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 00:59:56.003611 containerd[2473]: 2026-01-21 00:59:55.920 [INFO][5315] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0 csi-node-driver- calico-system ce3bc266-4945-4335-b09f-5dc1a5736d5d 740 0 2026-01-21 00:59:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-n-ed178c4493 csi-node-driver-6b85f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali98df118fc0d [] [] }} ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Namespace="calico-system" Pod="csi-node-driver-6b85f" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-" Jan 21 00:59:56.003611 containerd[2473]: 2026-01-21 00:59:55.920 [INFO][5315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Namespace="calico-system" Pod="csi-node-driver-6b85f" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" Jan 21 00:59:56.003611 containerd[2473]: 2026-01-21 00:59:55.946 [INFO][5333] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" HandleID="k8s-pod-network.569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Workload="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" Jan 21 00:59:56.004156 containerd[2473]: 2026-01-21 00:59:55.946 [INFO][5333] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" HandleID="k8s-pod-network.569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Workload="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-ed178c4493", "pod":"csi-node-driver-6b85f", "timestamp":"2026-01-21 00:59:55.94637553 +0000 UTC"}, Hostname:"ci-4547.0.0-n-ed178c4493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:59:56.004156 containerd[2473]: 2026-01-21 00:59:55.946 [INFO][5333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:59:56.004156 containerd[2473]: 2026-01-21 00:59:55.947 [INFO][5333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 00:59:56.004156 containerd[2473]: 2026-01-21 00:59:55.947 [INFO][5333] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-ed178c4493' Jan 21 00:59:56.004156 containerd[2473]: 2026-01-21 00:59:55.952 [INFO][5333] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:56.004156 containerd[2473]: 2026-01-21 00:59:55.958 [INFO][5333] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:56.004156 containerd[2473]: 2026-01-21 00:59:55.962 [INFO][5333] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:56.004156 containerd[2473]: 2026-01-21 00:59:55.963 [INFO][5333] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:56.004156 containerd[2473]: 2026-01-21 00:59:55.967 [INFO][5333] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:56.004481 containerd[2473]: 2026-01-21 00:59:55.967 [INFO][5333] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:56.004481 containerd[2473]: 2026-01-21 00:59:55.968 [INFO][5333] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88 Jan 21 00:59:56.004481 containerd[2473]: 2026-01-21 00:59:55.974 [INFO][5333] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:56.004481 containerd[2473]: 2026-01-21 00:59:55.979 [INFO][5333] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.131/26] block=192.168.70.128/26 handle="k8s-pod-network.569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:56.004481 containerd[2473]: 2026-01-21 00:59:55.980 [INFO][5333] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.131/26] handle="k8s-pod-network.569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:56.004481 containerd[2473]: 2026-01-21 00:59:55.980 [INFO][5333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
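
Each CNI ADD above traces the same triplet — "About to acquire host-wide IPAM lock", "Acquired", "Released" — so block updates for this host are serialized, and the pods receive consecutive addresses (.129, .130, .131). The toy Go sketch below only mirrors that acquire-assign-release shape; it is not Calico's implementation (the real lock is held in the datastore, not a Go mutex), and `assignNext` is a hypothetical stand-in.

```go
package main

import (
	"fmt"
	"sync"
)

// allocator mimics the shape of the log's acquire/assign/release sequence:
// one host-wide lock guards the next-free-address bookkeeping for a block.
type allocator struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	next int        // next host offset to hand out within 192.168.70.128/26
}

func (a *allocator) assignNext(pod string) string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	a.next++
	return fmt.Sprintf("192.168.70.%d/26 -> %s", 128+a.next, pod)
}

func main() {
	a := &allocator{}
	// The order the pods appear in the log: .129, .130, .131.
	pods := []string{
		"whisker-65f69864b5-f6kfk",
		"calico-kube-controllers-588547dc94-gdj8l",
		"csi-node-driver-6b85f",
	}
	for _, pod := range pods {
		fmt.Println(a.assignNext(pod))
	}
}
```
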
Jan 21 00:59:56.004481 containerd[2473]: 2026-01-21 00:59:55.980 [INFO][5333] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.131/26] IPv6=[] ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" HandleID="k8s-pod-network.569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Workload="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" Jan 21 00:59:56.004701 containerd[2473]: 2026-01-21 00:59:55.982 [INFO][5315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Namespace="calico-system" Pod="csi-node-driver-6b85f" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce3bc266-4945-4335-b09f-5dc1a5736d5d", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"", Pod:"csi-node-driver-6b85f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98df118fc0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:56.004814 containerd[2473]: 2026-01-21 00:59:55.983 [INFO][5315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.131/32] ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Namespace="calico-system" Pod="csi-node-driver-6b85f" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" Jan 21 00:59:56.004814 containerd[2473]: 2026-01-21 00:59:55.983 [INFO][5315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98df118fc0d ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Namespace="calico-system" Pod="csi-node-driver-6b85f" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" Jan 21 00:59:56.004814 containerd[2473]: 2026-01-21 00:59:55.987 [INFO][5315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Namespace="calico-system" Pod="csi-node-driver-6b85f" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" Jan 21 00:59:56.004923 containerd[2473]: 2026-01-21 00:59:55.988 [INFO][5315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Namespace="calico-system" Pod="csi-node-driver-6b85f" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce3bc266-4945-4335-b09f-5dc1a5736d5d", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88", Pod:"csi-node-driver-6b85f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98df118fc0d", MAC:"1e:8f:0f:4b:84:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:56.005016 containerd[2473]: 2026-01-21 00:59:56.000 [INFO][5315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" Namespace="calico-system" Pod="csi-node-driver-6b85f" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-csi--node--driver--6b85f-eth0" Jan 21 00:59:56.036184 kubelet[3975]: E0121 00:59:56.035182 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 00:59:56.042827 containerd[2473]: time="2026-01-21T00:59:56.042763187Z" level=info msg="connecting to shim 569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88" address="unix:///run/containerd/s/572072e163a0d456863578677e02612199cc225ed5ec949dbc6b36228217a759" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:56.075961 systemd[1]: Started cri-containerd-569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88.scope - libcontainer container 569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88. 
Jan 21 00:59:56.094000 audit: BPF prog-id=209 op=LOAD Jan 21 00:59:56.096604 kernel: kauditd_printk_skb: 55 callbacks suppressed Jan 21 00:59:56.096676 kernel: audit: type=1334 audit(1768957196.094:620): prog-id=209 op=LOAD Jan 21 00:59:56.098000 audit: BPF prog-id=210 op=LOAD Jan 21 00:59:56.102796 kernel: audit: type=1334 audit(1768957196.098:621): prog-id=210 op=LOAD Jan 21 00:59:56.098000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.108830 kernel: audit: type=1300 audit(1768957196.098:621): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.115802 kernel: audit: type=1327 audit(1768957196.098:621): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.098000 audit: BPF prog-id=210 op=UNLOAD Jan 21 00:59:56.122524 kernel: audit: type=1334 audit(1768957196.098:622): prog-id=210 op=UNLOAD Jan 21 00:59:56.122605 kernel: audit: type=1300 audit(1768957196.098:622): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.098000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.127427 kernel: audit: type=1327 audit(1768957196.098:622): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.129247 kernel: audit: type=1334 audit(1768957196.098:623): prog-id=211 op=LOAD Jan 21 00:59:56.098000 audit: BPF prog-id=211 op=LOAD Jan 21 00:59:56.134214 kernel: audit: type=1300 audit(1768957196.098:623): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.098000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.139782 kernel: audit: type=1327 audit(1768957196.098:623): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.098000 audit: BPF prog-id=212 op=LOAD Jan 21 00:59:56.098000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.098000 audit: BPF prog-id=212 op=UNLOAD Jan 21 00:59:56.098000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.098000 audit: BPF prog-id=211 op=UNLOAD Jan 21 00:59:56.098000 audit[5366]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.098000 audit: BPF prog-id=213 op=LOAD Jan 21 00:59:56.098000 audit[5366]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5355 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:56.098000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536396636643631343531386336643862613838383066313362353130 Jan 21 00:59:56.140564 containerd[2473]: time="2026-01-21T00:59:56.140533107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6b85f,Uid:ce3bc266-4945-4335-b09f-5dc1a5736d5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"569f6d614518c6d8ba8880f13b5102daef99826a66e5282c9bfce3d947ba4d88\"" Jan 21 00:59:56.149008 containerd[2473]: time="2026-01-21T00:59:56.148708786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 00:59:56.398151 containerd[2473]: time="2026-01-21T00:59:56.398016250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:56.400627 containerd[2473]: time="2026-01-21T00:59:56.400552152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 00:59:56.400627 containerd[2473]: time="2026-01-21T00:59:56.400584900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:56.400829 kubelet[3975]: E0121 00:59:56.400797 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 00:59:56.401130 kubelet[3975]: E0121 00:59:56.400841 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 00:59:56.401130 kubelet[3975]: E0121 00:59:56.400995 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbwbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:56.403132 containerd[2473]: time="2026-01-21T00:59:56.403107636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 00:59:56.655280 containerd[2473]: time="2026-01-21T00:59:56.655175052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:56.658434 containerd[2473]: time="2026-01-21T00:59:56.658333650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 00:59:56.658434 containerd[2473]: time="2026-01-21T00:59:56.658360323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:56.658644 kubelet[3975]: E0121 00:59:56.658602 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 00:59:56.658700 kubelet[3975]: E0121 00:59:56.658660 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 00:59:56.658891 kubelet[3975]: E0121 00:59:56.658809 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbwbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:56.660110 kubelet[3975]: E0121 00:59:56.660073 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:56.763920 systemd-networkd[2103]: cali12e73a924ae: Gained IPv6LL Jan 21 00:59:56.880154 containerd[2473]: time="2026-01-21T00:59:56.880100780Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-gb6hs,Uid:da2cfa64-3443-4024-927a-74dc911d349e,Namespace:kube-system,Attempt:0,}" Jan 21 00:59:56.880323 containerd[2473]: time="2026-01-21T00:59:56.880100781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bb965959-6mvxn,Uid:52de65b6-e239-41f1-ad3a-143641236290,Namespace:calico-apiserver,Attempt:0,}" Jan 21 00:59:56.991718 systemd-networkd[2103]: calia2ef8df133c: Link UP Jan 21 00:59:56.992600 systemd-networkd[2103]: calia2ef8df133c: Gained carrier Jan 21 00:59:57.012170 containerd[2473]: 2026-01-21 00:59:56.921 [INFO][5406] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 00:59:57.012170 containerd[2473]: 2026-01-21 00:59:56.932 [INFO][5406] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0 coredns-674b8bbfcf- kube-system da2cfa64-3443-4024-927a-74dc911d349e 851 0 2026-01-21 00:59:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-n-ed178c4493 coredns-674b8bbfcf-gb6hs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2ef8df133c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-gb6hs" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-" Jan 21 00:59:57.012170 containerd[2473]: 2026-01-21 00:59:56.932 [INFO][5406] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-gb6hs" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" Jan 21 00:59:57.012170 containerd[2473]: 2026-01-21 00:59:56.959 [INFO][5435] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" HandleID="k8s-pod-network.62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Workload="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" Jan 21 00:59:57.012548 containerd[2473]: 2026-01-21 00:59:56.960 [INFO][5435] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" HandleID="k8s-pod-network.62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Workload="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-n-ed178c4493", "pod":"coredns-674b8bbfcf-gb6hs", "timestamp":"2026-01-21 00:59:56.95971011 +0000 UTC"}, Hostname:"ci-4547.0.0-n-ed178c4493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:59:57.012548 containerd[2473]: 2026-01-21 00:59:56.960 [INFO][5435] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:59:57.012548 containerd[2473]: 2026-01-21 00:59:56.960 [INFO][5435] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 00:59:57.012548 containerd[2473]: 2026-01-21 00:59:56.960 [INFO][5435] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-ed178c4493' Jan 21 00:59:57.012548 containerd[2473]: 2026-01-21 00:59:56.964 [INFO][5435] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.012548 containerd[2473]: 2026-01-21 00:59:56.967 [INFO][5435] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.012548 containerd[2473]: 2026-01-21 00:59:56.970 [INFO][5435] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.012548 containerd[2473]: 2026-01-21 00:59:56.971 [INFO][5435] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.012548 containerd[2473]: 2026-01-21 00:59:56.973 [INFO][5435] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.012762 containerd[2473]: 2026-01-21 00:59:56.973 [INFO][5435] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.012762 containerd[2473]: 2026-01-21 00:59:56.974 [INFO][5435] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4 Jan 21 00:59:57.012762 containerd[2473]: 2026-01-21 00:59:56.978 [INFO][5435] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.012762 containerd[2473]: 2026-01-21 00:59:56.984 [INFO][5435] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.132/26] block=192.168.70.128/26 handle="k8s-pod-network.62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.012762 containerd[2473]: 2026-01-21 00:59:56.985 [INFO][5435] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.132/26] handle="k8s-pod-network.62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.012762 containerd[2473]: 2026-01-21 00:59:56.985 [INFO][5435] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
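The entries above and below repeat the same failure pattern: containerd's pulls of the ghcr.io/flatcar/calico images at tag v3.30.4 (csi, node-driver-registrar, and further down apiserver) return 404 Not Found, so the kubelet records ErrImagePull and then ImagePullBackOff for the affected containers. When scanning a journal this dense, a small filter over the raw text can list just the failing image references; the sketch below is plain string matching over stdin and assumes nothing beyond the log format shown here:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches the escaped image reference inside containerd's
        // `PullImage \"...\" failed` error lines.
        re := regexp.MustCompile(`PullImage \\?"([^"\\]+)\\?" failed`)
        failures := map[string]int{}

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                failures[m[1]]++
            }
        }
        for img, n := range failures {
            fmt.Printf("%d pull failure(s): %s\n", n, img)
        }
    }

Fed this section of the journal, it would report ghcr.io/flatcar/calico/csi:v3.30.4, ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4 and, later in the section, ghcr.io/flatcar/calico/apiserver:v3.30.4.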
Jan 21 00:59:57.012762 containerd[2473]: 2026-01-21 00:59:56.985 [INFO][5435] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.132/26] IPv6=[] ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" HandleID="k8s-pod-network.62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Workload="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" Jan 21 00:59:57.012925 containerd[2473]: 2026-01-21 00:59:56.987 [INFO][5406] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-gb6hs" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da2cfa64-3443-4024-927a-74dc911d349e", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"", Pod:"coredns-674b8bbfcf-gb6hs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ef8df133c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:57.012925 containerd[2473]: 2026-01-21 00:59:56.987 [INFO][5406] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.132/32] ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-gb6hs" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" Jan 21 00:59:57.012925 containerd[2473]: 2026-01-21 00:59:56.987 [INFO][5406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2ef8df133c ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-gb6hs" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" Jan 21 00:59:57.012925 containerd[2473]: 2026-01-21 00:59:56.992 [INFO][5406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-gb6hs" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" Jan 21 00:59:57.012925 containerd[2473]: 2026-01-21 00:59:56.993 [INFO][5406] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-gb6hs" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da2cfa64-3443-4024-927a-74dc911d349e", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4", Pod:"coredns-674b8bbfcf-gb6hs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ef8df133c", MAC:"ae:63:4b:c4:4f:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:57.012925 containerd[2473]: 2026-01-21 00:59:57.010 [INFO][5406] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-gb6hs" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--gb6hs-eth0" Jan 21 00:59:57.038734 kubelet[3975]: E0121 00:59:57.038673 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 00:59:57.041643 kubelet[3975]: E0121 00:59:57.041608 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:57.088631 containerd[2473]: time="2026-01-21T00:59:57.088595224Z" level=info msg="connecting to shim 62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4" address="unix:///run/containerd/s/aededa014349a129f048fe7602cf06092880c267c88c23941a874e6f6947522b" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:57.118114 systemd-networkd[2103]: cali0018be24eac: Link UP Jan 21 00:59:57.119011 systemd-networkd[2103]: cali0018be24eac: Gained carrier Jan 21 00:59:57.120193 systemd[1]: Started cri-containerd-62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4.scope - libcontainer container 62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4. Jan 21 00:59:57.135000 audit: BPF prog-id=214 op=LOAD Jan 21 00:59:57.135000 audit: BPF prog-id=215 op=LOAD Jan 21 00:59:57.135000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5469 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632633035653234316666336430363137396563663565613666396261 Jan 21 00:59:57.135000 audit: BPF prog-id=215 op=UNLOAD Jan 21 00:59:57.135000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5469 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632633035653234316666336430363137396563663565613666396261 Jan 21 00:59:57.137000 audit: BPF prog-id=216 op=LOAD Jan 21 00:59:57.137000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5469 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632633035653234316666336430363137396563663565613666396261 Jan 21 
00:59:57.137000 audit: BPF prog-id=217 op=LOAD Jan 21 00:59:57.137000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5469 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632633035653234316666336430363137396563663565613666396261 Jan 21 00:59:57.137000 audit: BPF prog-id=217 op=UNLOAD Jan 21 00:59:57.137000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5469 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632633035653234316666336430363137396563663565613666396261 Jan 21 00:59:57.137000 audit: BPF prog-id=216 op=UNLOAD Jan 21 00:59:57.137000 audit[5481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5469 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632633035653234316666336430363137396563663565613666396261 Jan 21 00:59:57.137000 audit: BPF prog-id=218 op=LOAD Jan 21 00:59:57.137000 audit[5481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5469 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632633035653234316666336430363137396563663565613666396261 Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:56.922 [INFO][5410] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:56.931 [INFO][5410] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0 calico-apiserver-58bb965959- calico-apiserver 52de65b6-e239-41f1-ad3a-143641236290 858 0 2026-01-21 00:59:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58bb965959 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-n-ed178c4493 calico-apiserver-58bb965959-6mvxn 
eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0018be24eac [] [] }} ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-6mvxn" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:56.932 [INFO][5410] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-6mvxn" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:56.961 [INFO][5434] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" HandleID="k8s-pod-network.cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Workload="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:56.961 [INFO][5434] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" HandleID="k8s-pod-network.cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Workload="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-n-ed178c4493", "pod":"calico-apiserver-58bb965959-6mvxn", "timestamp":"2026-01-21 00:59:56.961423864 +0000 UTC"}, Hostname:"ci-4547.0.0-n-ed178c4493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:56.962 [INFO][5434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:56.985 [INFO][5434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:56.985 [INFO][5434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-ed178c4493' Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.066 [INFO][5434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.084 [INFO][5434] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.089 [INFO][5434] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.090 [INFO][5434] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.093 [INFO][5434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.093 [INFO][5434] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.095 [INFO][5434] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623 Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.102 [INFO][5434] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.113 [INFO][5434] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.133/26] block=192.168.70.128/26 handle="k8s-pod-network.cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.113 [INFO][5434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.133/26] handle="k8s-pod-network.cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.113 [INFO][5434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
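The WorkloadEndpoint dumps in this section print numeric struct fields in Go hex notation, so the coredns ports appear as Port:0x35 and Port:0x23c1 rather than 53 and 9153. A trivial, purely illustrative check of those two values (copied from the coredns endpoint dump above):

    package main

    import "fmt"

    func main() {
        // Hex port values copied from the coredns WorkloadEndpoint dump above.
        ports := map[string]uint16{
            "dns":     0x35,   // UDP 53
            "dns-tcp": 0x35,   // TCP 53
            "metrics": 0x23c1, // TCP 9153
        }
        for name, port := range ports {
            fmt.Printf("%-8s -> %d\n", name, port)
        }
    }

This confirms they are the usual CoreDNS ports: 53 for dns/dns-tcp and 9153 for metrics.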
Jan 21 00:59:57.140582 containerd[2473]: 2026-01-21 00:59:57.113 [INFO][5434] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.133/26] IPv6=[] ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" HandleID="k8s-pod-network.cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Workload="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" Jan 21 00:59:57.141644 containerd[2473]: 2026-01-21 00:59:57.116 [INFO][5410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-6mvxn" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0", GenerateName:"calico-apiserver-58bb965959-", Namespace:"calico-apiserver", SelfLink:"", UID:"52de65b6-e239-41f1-ad3a-143641236290", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bb965959", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"", Pod:"calico-apiserver-58bb965959-6mvxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0018be24eac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:57.141644 containerd[2473]: 2026-01-21 00:59:57.116 [INFO][5410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.133/32] ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-6mvxn" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" Jan 21 00:59:57.141644 containerd[2473]: 2026-01-21 00:59:57.116 [INFO][5410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0018be24eac ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-6mvxn" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" Jan 21 00:59:57.141644 containerd[2473]: 2026-01-21 00:59:57.118 [INFO][5410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-6mvxn" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" Jan 21 00:59:57.141644 containerd[2473]: 2026-01-21 00:59:57.119 
[INFO][5410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-6mvxn" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0", GenerateName:"calico-apiserver-58bb965959-", Namespace:"calico-apiserver", SelfLink:"", UID:"52de65b6-e239-41f1-ad3a-143641236290", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bb965959", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623", Pod:"calico-apiserver-58bb965959-6mvxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0018be24eac", MAC:"56:f2:fc:83:53:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:57.141644 containerd[2473]: 2026-01-21 00:59:57.137 [INFO][5410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-6mvxn" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--6mvxn-eth0" Jan 21 00:59:57.147848 systemd-networkd[2103]: cali98df118fc0d: Gained IPv6LL Jan 21 00:59:57.176083 containerd[2473]: time="2026-01-21T00:59:57.176053619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gb6hs,Uid:da2cfa64-3443-4024-927a-74dc911d349e,Namespace:kube-system,Attempt:0,} returns sandbox id \"62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4\"" Jan 21 00:59:57.182736 containerd[2473]: time="2026-01-21T00:59:57.182705503Z" level=info msg="connecting to shim cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623" address="unix:///run/containerd/s/81c8c18894dacac8dde875d68ce5ef4fd48299ce5644b1d100937d8a53dbe8e4" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:57.186495 containerd[2473]: time="2026-01-21T00:59:57.186470375Z" level=info msg="CreateContainer within sandbox \"62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 00:59:57.211251 containerd[2473]: time="2026-01-21T00:59:57.211185907Z" level=info msg="Container 274dcdd02b325879a49f50a29d44f4b2d37f0057e20943ec9314950085e6593e: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:57.223629 
containerd[2473]: time="2026-01-21T00:59:57.223595138Z" level=info msg="CreateContainer within sandbox \"62c05e241ff3d06179ecf5ea6f9baa597a874f58ca4c56ef8daf5f95574668b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"274dcdd02b325879a49f50a29d44f4b2d37f0057e20943ec9314950085e6593e\"" Jan 21 00:59:57.225127 containerd[2473]: time="2026-01-21T00:59:57.225079246Z" level=info msg="StartContainer for \"274dcdd02b325879a49f50a29d44f4b2d37f0057e20943ec9314950085e6593e\"" Jan 21 00:59:57.227807 containerd[2473]: time="2026-01-21T00:59:57.227673076Z" level=info msg="connecting to shim 274dcdd02b325879a49f50a29d44f4b2d37f0057e20943ec9314950085e6593e" address="unix:///run/containerd/s/aededa014349a129f048fe7602cf06092880c267c88c23941a874e6f6947522b" protocol=ttrpc version=3 Jan 21 00:59:57.230117 systemd[1]: Started cri-containerd-cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623.scope - libcontainer container cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623. Jan 21 00:59:57.247299 systemd[1]: Started cri-containerd-274dcdd02b325879a49f50a29d44f4b2d37f0057e20943ec9314950085e6593e.scope - libcontainer container 274dcdd02b325879a49f50a29d44f4b2d37f0057e20943ec9314950085e6593e. Jan 21 00:59:57.252000 audit: BPF prog-id=219 op=LOAD Jan 21 00:59:57.253000 audit: BPF prog-id=220 op=LOAD Jan 21 00:59:57.253000 audit[5537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5520 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326462653962353332363939626132343036393533303738383838 Jan 21 00:59:57.254000 audit: BPF prog-id=220 op=UNLOAD Jan 21 00:59:57.254000 audit[5537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5520 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326462653962353332363939626132343036393533303738383838 Jan 21 00:59:57.254000 audit: BPF prog-id=221 op=LOAD Jan 21 00:59:57.254000 audit[5537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5520 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326462653962353332363939626132343036393533303738383838 Jan 21 00:59:57.254000 audit: BPF prog-id=222 op=LOAD Jan 21 00:59:57.254000 audit[5537]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5520 pid=5537 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326462653962353332363939626132343036393533303738383838 Jan 21 00:59:57.254000 audit: BPF prog-id=222 op=UNLOAD Jan 21 00:59:57.254000 audit[5537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5520 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326462653962353332363939626132343036393533303738383838 Jan 21 00:59:57.254000 audit: BPF prog-id=221 op=UNLOAD Jan 21 00:59:57.254000 audit[5537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5520 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326462653962353332363939626132343036393533303738383838 Jan 21 00:59:57.254000 audit: BPF prog-id=223 op=LOAD Jan 21 00:59:57.254000 audit[5537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5520 pid=5537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326462653962353332363939626132343036393533303738383838 Jan 21 00:59:57.263000 audit: BPF prog-id=224 op=LOAD Jan 21 00:59:57.264000 audit: BPF prog-id=225 op=LOAD Jan 21 00:59:57.264000 audit[5556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5469 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237346463646430326233323538373961343966353061323964343466 Jan 21 00:59:57.264000 audit: BPF prog-id=225 op=UNLOAD Jan 21 00:59:57.264000 audit[5556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5469 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237346463646430326233323538373961343966353061323964343466 Jan 21 00:59:57.265000 audit: BPF prog-id=226 op=LOAD Jan 21 00:59:57.265000 audit[5556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5469 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237346463646430326233323538373961343966353061323964343466 Jan 21 00:59:57.265000 audit: BPF prog-id=227 op=LOAD Jan 21 00:59:57.265000 audit[5556]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5469 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237346463646430326233323538373961343966353061323964343466 Jan 21 00:59:57.265000 audit: BPF prog-id=227 op=UNLOAD Jan 21 00:59:57.265000 audit[5556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5469 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237346463646430326233323538373961343966353061323964343466 Jan 21 00:59:57.266000 audit: BPF prog-id=226 op=UNLOAD Jan 21 00:59:57.266000 audit[5556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5469 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237346463646430326233323538373961343966353061323964343466 Jan 21 00:59:57.266000 audit: BPF prog-id=228 op=LOAD Jan 21 00:59:57.266000 audit[5556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5469 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.266000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237346463646430326233323538373961343966353061323964343466 Jan 21 00:59:57.295357 containerd[2473]: time="2026-01-21T00:59:57.295257041Z" level=info msg="StartContainer for \"274dcdd02b325879a49f50a29d44f4b2d37f0057e20943ec9314950085e6593e\" returns successfully" Jan 21 00:59:57.326646 containerd[2473]: time="2026-01-21T00:59:57.326606017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bb965959-6mvxn,Uid:52de65b6-e239-41f1-ad3a-143641236290,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cc2dbe9b532699ba2406953078888a4b80f4b1471274d3b8c145e6cbbc7ee623\"" Jan 21 00:59:57.329164 containerd[2473]: time="2026-01-21T00:59:57.329119990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 00:59:57.559885 kubelet[3975]: I0121 00:59:57.559760 3975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:59:57.575348 containerd[2473]: time="2026-01-21T00:59:57.575169293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:57.578429 containerd[2473]: time="2026-01-21T00:59:57.578355931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 00:59:57.578520 containerd[2473]: time="2026-01-21T00:59:57.578464406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:57.578664 kubelet[3975]: E0121 00:59:57.578624 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:59:57.578704 kubelet[3975]: E0121 00:59:57.578679 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:59:57.578886 kubelet[3975]: E0121 00:59:57.578850 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9bxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bb965959-6mvxn_calico-apiserver(52de65b6-e239-41f1-ad3a-143641236290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:57.581097 kubelet[3975]: E0121 00:59:57.581023 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 00:59:57.598000 audit[5609]: NETFILTER_CFG table=filter:122 family=2 entries=21 op=nft_register_rule pid=5609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:57.598000 audit[5609]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc73085c0 a2=0 a3=7ffdc73085ac items=0 ppid=4132 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:57.603000 audit[5609]: NETFILTER_CFG table=nat:123 family=2 entries=19 op=nft_register_chain pid=5609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 
00:59:57.603000 audit[5609]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffdc73085c0 a2=0 a3=7ffdc73085ac items=0 ppid=4132 pid=5609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:57.603000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:57.881619 containerd[2473]: time="2026-01-21T00:59:57.881557933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bb965959-wpbx9,Uid:fa6a1068-061f-4c26-9e2c-97c6b3c762d5,Namespace:calico-apiserver,Attempt:0,}" Jan 21 00:59:57.881910 containerd[2473]: time="2026-01-21T00:59:57.881878969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rptg9,Uid:7ee9a5d5-5f43-44ae-96f1-f6576c01185e,Namespace:kube-system,Attempt:0,}" Jan 21 00:59:58.043206 kubelet[3975]: E0121 00:59:58.043101 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 00:59:58.048257 systemd-networkd[2103]: calic99135bd72c: Link UP Jan 21 00:59:58.049947 systemd-networkd[2103]: calic99135bd72c: Gained carrier Jan 21 00:59:58.057824 kubelet[3975]: E0121 00:59:58.057382 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:57.930 [INFO][5615] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:57.946 [INFO][5615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0 calico-apiserver-58bb965959- calico-apiserver fa6a1068-061f-4c26-9e2c-97c6b3c762d5 857 0 2026-01-21 00:59:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58bb965959 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-n-ed178c4493 calico-apiserver-58bb965959-wpbx9 eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic99135bd72c [] [] }} ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-wpbx9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:57.946 [INFO][5615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-wpbx9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:57.996 [INFO][5642] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" HandleID="k8s-pod-network.9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Workload="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:57.997 [INFO][5642] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" HandleID="k8s-pod-network.9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Workload="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-n-ed178c4493", "pod":"calico-apiserver-58bb965959-wpbx9", "timestamp":"2026-01-21 00:59:57.996750376 +0000 UTC"}, Hostname:"ci-4547.0.0-n-ed178c4493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:57.997 [INFO][5642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:57.997 [INFO][5642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
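The ErrImagePull and ImagePullBackOff records a little earlier show containerd receiving 404 Not Found from ghcr.io for flatcar/calico/apiserver:v3.30.4. As a rough illustration only (none of this appears in the log, and it assumes ghcr.io's usual anonymous OCI token flow for public images; the endpoint names are that assumption, not something the log confirms), a check like the following asks the registry directly whether that tag resolves:

    import json
    import urllib.error
    import urllib.request

    # Illustrative sketch, not from the log: query ghcr.io for the tag that
    # containerd failed to resolve. The token and manifest endpoints follow
    # the standard OCI distribution flow and are assumptions here.
    repo, tag = "flatcar/calico/apiserver", "v3.30.4"
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull&service=ghcr.io"
    token = json.load(urllib.request.urlopen(token_url))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("tag exists:", resp.status)
    except urllib.error.HTTPError as err:
        # A 404 here would match the NotFound that kubelet logged above.
        print("registry answered:", err.code)
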
Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:57.997 [INFO][5642] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-ed178c4493' Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.007 [INFO][5642] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.009 [INFO][5642] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.016 [INFO][5642] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.018 [INFO][5642] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.021 [INFO][5642] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.022 [INFO][5642] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.023 [INFO][5642] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256 Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.027 [INFO][5642] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.035 [INFO][5642] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.134/26] block=192.168.70.128/26 handle="k8s-pod-network.9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.035 [INFO][5642] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.134/26] handle="k8s-pod-network.9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.035 [INFO][5642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
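At this point Calico IPAM has claimed 192.168.70.134/26 for the calico-apiserver-58bb965959-wpbx9 endpoint out of the block this node holds an affinity for. A small sanity check (illustrative only, using nothing beyond the Python standard library) shows how that address relates to the 192.168.70.128/26 block named in the entries above:

    import ipaddress

    # Illustrative sketch: the /26 block Calico has affinity for on this node
    # spans 64 addresses, and the address it just claimed falls inside it.
    block = ipaddress.ip_network("192.168.70.128/26")
    claimed = ipaddress.ip_address("192.168.70.134")
    print(block.num_addresses)   # 64
    print(block[0], block[-1])   # 192.168.70.128 192.168.70.191
    print(claimed in block)      # True
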
Jan 21 00:59:58.072815 containerd[2473]: 2026-01-21 00:59:58.035 [INFO][5642] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.134/26] IPv6=[] ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" HandleID="k8s-pod-network.9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Workload="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" Jan 21 00:59:58.073531 containerd[2473]: 2026-01-21 00:59:58.042 [INFO][5615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-wpbx9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0", GenerateName:"calico-apiserver-58bb965959-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa6a1068-061f-4c26-9e2c-97c6b3c762d5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bb965959", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"", Pod:"calico-apiserver-58bb965959-wpbx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic99135bd72c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:58.073531 containerd[2473]: 2026-01-21 00:59:58.043 [INFO][5615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.134/32] ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-wpbx9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" Jan 21 00:59:58.073531 containerd[2473]: 2026-01-21 00:59:58.043 [INFO][5615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic99135bd72c ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-wpbx9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" Jan 21 00:59:58.073531 containerd[2473]: 2026-01-21 00:59:58.049 [INFO][5615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-wpbx9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" Jan 21 00:59:58.073531 containerd[2473]: 2026-01-21 00:59:58.051 
[INFO][5615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-wpbx9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0", GenerateName:"calico-apiserver-58bb965959-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa6a1068-061f-4c26-9e2c-97c6b3c762d5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bb965959", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256", Pod:"calico-apiserver-58bb965959-wpbx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic99135bd72c", MAC:"4a:8e:4b:06:7e:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:58.073531 containerd[2473]: 2026-01-21 00:59:58.066 [INFO][5615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" Namespace="calico-apiserver" Pod="calico-apiserver-58bb965959-wpbx9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-calico--apiserver--58bb965959--wpbx9-eth0" Jan 21 00:59:58.107000 audit[5668]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5668 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:58.107000 audit[5668]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe922bf8b0 a2=0 a3=7ffe922bf89c items=0 ppid=4132 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:58.113868 kubelet[3975]: I0121 00:59:58.113820 3975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gb6hs" podStartSLOduration=41.113805918 podStartE2EDuration="41.113805918s" podCreationTimestamp="2026-01-21 00:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:59:58.113597276 +0000 UTC m=+46.326325826" watchObservedRunningTime="2026-01-21 00:59:58.113805918 
+0000 UTC m=+46.326534468" Jan 21 00:59:58.123000 audit[5668]: NETFILTER_CFG table=nat:125 family=2 entries=14 op=nft_register_rule pid=5668 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:58.123000 audit[5668]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe922bf8b0 a2=0 a3=0 items=0 ppid=4132 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.123000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:58.142841 containerd[2473]: time="2026-01-21T00:59:58.142606327Z" level=info msg="connecting to shim 9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256" address="unix:///run/containerd/s/171740b7c8e3ffc94edb3b8e16aa459a9f44b16533766b38e8bdbbbca8256e86" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:58.184098 systemd[1]: Started cri-containerd-9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256.scope - libcontainer container 9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256. Jan 21 00:59:58.184000 audit: BPF prog-id=229 op=LOAD Jan 21 00:59:58.184000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeefc36150 a2=98 a3=1fffffffffffffff items=0 ppid=5610 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.184000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:59:58.185000 audit: BPF prog-id=229 op=UNLOAD Jan 21 00:59:58.185000 audit[5713]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeefc36120 a3=0 items=0 ppid=5610 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.185000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:59:58.185000 audit: BPF prog-id=230 op=LOAD Jan 21 00:59:58.185000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeefc36030 a2=94 a3=3 items=0 ppid=5610 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.185000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:59:58.185000 audit: BPF prog-id=230 op=UNLOAD Jan 21 00:59:58.185000 audit[5713]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeefc36030 a2=94 a3=3 items=0 ppid=5610 pid=5713 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.185000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:59:58.185000 audit: BPF prog-id=231 op=LOAD Jan 21 00:59:58.185000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeefc36070 a2=94 a3=7ffeefc36250 items=0 ppid=5610 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.185000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:59:58.185000 audit: BPF prog-id=231 op=UNLOAD Jan 21 00:59:58.185000 audit[5713]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeefc36070 a2=94 a3=7ffeefc36250 items=0 ppid=5610 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.185000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:59:58.186000 audit: BPF prog-id=232 op=LOAD Jan 21 00:59:58.186000 audit[5714]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffccece5250 a2=98 a3=3 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.186000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.186000 audit: BPF prog-id=232 op=UNLOAD Jan 21 00:59:58.186000 audit[5714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffccece5220 a3=0 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.186000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.186000 audit: BPF prog-id=233 op=LOAD Jan 21 00:59:58.186000 audit[5714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffccece5040 a2=94 a3=54428f items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.186000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.186000 audit: BPF prog-id=233 op=UNLOAD Jan 21 00:59:58.186000 audit[5714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffccece5040 a2=94 a3=54428f items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.186000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.186000 audit: BPF prog-id=234 op=LOAD Jan 21 00:59:58.186000 audit[5714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffccece5070 a2=94 a3=2 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.186000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.186000 audit: BPF prog-id=234 op=UNLOAD Jan 21 00:59:58.186000 audit[5714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffccece5070 a2=0 a3=2 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.186000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.190699 systemd-networkd[2103]: califd5f2e65518: Link UP Jan 21 00:59:58.193924 systemd-networkd[2103]: califd5f2e65518: Gained carrier Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:57.947 [INFO][5627] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:57.960 [INFO][5627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0 coredns-674b8bbfcf- kube-system 7ee9a5d5-5f43-44ae-96f1-f6576c01185e 856 0 2026-01-21 00:59:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-n-ed178c4493 coredns-674b8bbfcf-rptg9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd5f2e65518 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Namespace="kube-system" Pod="coredns-674b8bbfcf-rptg9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:57.960 [INFO][5627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Namespace="kube-system" Pod="coredns-674b8bbfcf-rptg9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.008 [INFO][5649] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" HandleID="k8s-pod-network.b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Workload="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.009 [INFO][5649] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" HandleID="k8s-pod-network.b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Workload="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5850), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-n-ed178c4493", "pod":"coredns-674b8bbfcf-rptg9", "timestamp":"2026-01-21 00:59:58.008404218 +0000 UTC"}, Hostname:"ci-4547.0.0-n-ed178c4493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.011 [INFO][5649] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.035 [INFO][5649] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.036 [INFO][5649] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-ed178c4493' Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.107 [INFO][5649] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.129 [INFO][5649] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.142 [INFO][5649] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.147 [INFO][5649] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.153 [INFO][5649] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.153 [INFO][5649] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.161 [INFO][5649] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.170 [INFO][5649] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.182 [INFO][5649] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.135/26] block=192.168.70.128/26 handle="k8s-pod-network.b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.183 [INFO][5649] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.135/26] handle="k8s-pod-network.b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.183 [INFO][5649] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
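The audit records scattered through this section (for runc, bpftool, and iptables-restore) all carry PROCTITLE fields, which are the audited process's argv, hex-encoded with NUL bytes between arguments. A short decoding sketch (illustrative only; the hex string is copied from the iptables-restore records above) recovers the readable command line:

    # Illustrative sketch: decode an audit PROCTITLE value back into argv.
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters
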
Jan 21 00:59:58.213959 containerd[2473]: 2026-01-21 00:59:58.183 [INFO][5649] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.135/26] IPv6=[] ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" HandleID="k8s-pod-network.b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Workload="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" Jan 21 00:59:58.214489 containerd[2473]: 2026-01-21 00:59:58.188 [INFO][5627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Namespace="kube-system" Pod="coredns-674b8bbfcf-rptg9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7ee9a5d5-5f43-44ae-96f1-f6576c01185e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"", Pod:"coredns-674b8bbfcf-rptg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd5f2e65518", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:58.214489 containerd[2473]: 2026-01-21 00:59:58.189 [INFO][5627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.135/32] ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Namespace="kube-system" Pod="coredns-674b8bbfcf-rptg9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" Jan 21 00:59:58.214489 containerd[2473]: 2026-01-21 00:59:58.189 [INFO][5627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd5f2e65518 ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Namespace="kube-system" Pod="coredns-674b8bbfcf-rptg9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" Jan 21 00:59:58.214489 containerd[2473]: 2026-01-21 00:59:58.192 [INFO][5627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-rptg9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" Jan 21 00:59:58.214489 containerd[2473]: 2026-01-21 00:59:58.192 [INFO][5627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Namespace="kube-system" Pod="coredns-674b8bbfcf-rptg9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7ee9a5d5-5f43-44ae-96f1-f6576c01185e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a", Pod:"coredns-674b8bbfcf-rptg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd5f2e65518", MAC:"7a:62:d3:23:2a:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:58.214489 containerd[2473]: 2026-01-21 00:59:58.208 [INFO][5627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" Namespace="kube-system" Pod="coredns-674b8bbfcf-rptg9" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-coredns--674b8bbfcf--rptg9-eth0" Jan 21 00:59:58.224000 audit: BPF prog-id=235 op=LOAD Jan 21 00:59:58.225000 audit: BPF prog-id=236 op=LOAD Jan 21 00:59:58.225000 audit[5696]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=5681 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653038356364633138383935623062383065613134336135306231 Jan 21 00:59:58.225000 audit: BPF prog-id=236 op=UNLOAD Jan 21 00:59:58.225000 
audit[5696]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5681 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653038356364633138383935623062383065613134336135306231 Jan 21 00:59:58.225000 audit: BPF prog-id=237 op=LOAD Jan 21 00:59:58.225000 audit[5696]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=5681 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653038356364633138383935623062383065613134336135306231 Jan 21 00:59:58.225000 audit: BPF prog-id=238 op=LOAD Jan 21 00:59:58.225000 audit[5696]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=5681 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653038356364633138383935623062383065613134336135306231 Jan 21 00:59:58.225000 audit: BPF prog-id=238 op=UNLOAD Jan 21 00:59:58.225000 audit[5696]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5681 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653038356364633138383935623062383065613134336135306231 Jan 21 00:59:58.225000 audit: BPF prog-id=237 op=UNLOAD Jan 21 00:59:58.225000 audit[5696]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5681 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653038356364633138383935623062383065613134336135306231 Jan 21 00:59:58.225000 audit: BPF prog-id=239 op=LOAD Jan 21 00:59:58.225000 audit[5696]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=5681 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653038356364633138383935623062383065613134336135306231 Jan 21 00:59:58.235876 systemd-networkd[2103]: cali0018be24eac: Gained IPv6LL Jan 21 00:59:58.259612 containerd[2473]: time="2026-01-21T00:59:58.258159832Z" level=info msg="connecting to shim b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a" address="unix:///run/containerd/s/2570237a740cd28f0f3933104f6190bd1b5659c9e4c40565f5eff302779ca44e" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:58.298946 systemd[1]: Started cri-containerd-b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a.scope - libcontainer container b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a. Jan 21 00:59:58.314000 audit: BPF prog-id=240 op=LOAD Jan 21 00:59:58.314000 audit: BPF prog-id=241 op=LOAD Jan 21 00:59:58.314000 audit[5749]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5737 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313664633830383230313364343162613363616433396136653534 Jan 21 00:59:58.314000 audit: BPF prog-id=241 op=UNLOAD Jan 21 00:59:58.314000 audit[5749]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5737 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313664633830383230313364343162613363616433396136653534 Jan 21 00:59:58.314000 audit: BPF prog-id=242 op=LOAD Jan 21 00:59:58.314000 audit[5749]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5737 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313664633830383230313364343162613363616433396136653534 Jan 21 00:59:58.314000 audit: BPF prog-id=243 op=LOAD Jan 21 00:59:58.314000 audit[5749]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5737 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
00:59:58.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313664633830383230313364343162613363616433396136653534 Jan 21 00:59:58.314000 audit: BPF prog-id=243 op=UNLOAD Jan 21 00:59:58.314000 audit[5749]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5737 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313664633830383230313364343162613363616433396136653534 Jan 21 00:59:58.314000 audit: BPF prog-id=242 op=UNLOAD Jan 21 00:59:58.314000 audit[5749]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5737 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313664633830383230313364343162613363616433396136653534 Jan 21 00:59:58.314000 audit: BPF prog-id=244 op=LOAD Jan 21 00:59:58.314000 audit[5749]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5737 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313664633830383230313364343162613363616433396136653534 Jan 21 00:59:58.368598 containerd[2473]: time="2026-01-21T00:59:58.368561365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rptg9,Uid:7ee9a5d5-5f43-44ae-96f1-f6576c01185e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a\"" Jan 21 00:59:58.392589 containerd[2473]: time="2026-01-21T00:59:58.392556489Z" level=info msg="CreateContainer within sandbox \"b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 00:59:58.412930 containerd[2473]: time="2026-01-21T00:59:58.412814781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bb965959-wpbx9,Uid:fa6a1068-061f-4c26-9e2c-97c6b3c762d5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9ce085cdc18895b0b80ea143a50b1f84fe1c428981b945cafe7f5af6d8284256\"" Jan 21 00:59:58.417347 containerd[2473]: time="2026-01-21T00:59:58.417324071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 00:59:58.450521 containerd[2473]: time="2026-01-21T00:59:58.450495073Z" level=info msg="Container 
cfed347daa91c832e70902c3664abc95438959a6d3acc254c0db169bc0743e63: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:59:58.464792 containerd[2473]: time="2026-01-21T00:59:58.464473990Z" level=info msg="CreateContainer within sandbox \"b716dc8082013d41ba3cad39a6e54aff07dcb259b8486a0d8248287d6011ab4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfed347daa91c832e70902c3664abc95438959a6d3acc254c0db169bc0743e63\"" Jan 21 00:59:58.466356 containerd[2473]: time="2026-01-21T00:59:58.466331916Z" level=info msg="StartContainer for \"cfed347daa91c832e70902c3664abc95438959a6d3acc254c0db169bc0743e63\"" Jan 21 00:59:58.467761 containerd[2473]: time="2026-01-21T00:59:58.467736755Z" level=info msg="connecting to shim cfed347daa91c832e70902c3664abc95438959a6d3acc254c0db169bc0743e63" address="unix:///run/containerd/s/2570237a740cd28f0f3933104f6190bd1b5659c9e4c40565f5eff302779ca44e" protocol=ttrpc version=3 Jan 21 00:59:58.495968 systemd[1]: Started cri-containerd-cfed347daa91c832e70902c3664abc95438959a6d3acc254c0db169bc0743e63.scope - libcontainer container cfed347daa91c832e70902c3664abc95438959a6d3acc254c0db169bc0743e63. Jan 21 00:59:58.509000 audit: BPF prog-id=245 op=LOAD Jan 21 00:59:58.510000 audit: BPF prog-id=246 op=LOAD Jan 21 00:59:58.510000 audit[5792]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5737 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656433343764616139316338333265373039303263333636346162 Jan 21 00:59:58.510000 audit: BPF prog-id=246 op=UNLOAD Jan 21 00:59:58.510000 audit[5792]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5737 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656433343764616139316338333265373039303263333636346162 Jan 21 00:59:58.510000 audit: BPF prog-id=247 op=LOAD Jan 21 00:59:58.510000 audit[5792]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5737 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656433343764616139316338333265373039303263333636346162 Jan 21 00:59:58.510000 audit: BPF prog-id=248 op=LOAD Jan 21 00:59:58.510000 audit[5792]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5737 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656433343764616139316338333265373039303263333636346162 Jan 21 00:59:58.510000 audit: BPF prog-id=248 op=UNLOAD Jan 21 00:59:58.510000 audit[5792]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5737 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656433343764616139316338333265373039303263333636346162 Jan 21 00:59:58.511000 audit: BPF prog-id=247 op=UNLOAD Jan 21 00:59:58.511000 audit[5792]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5737 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656433343764616139316338333265373039303263333636346162 Jan 21 00:59:58.511000 audit: BPF prog-id=249 op=LOAD Jan 21 00:59:58.511000 audit[5792]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5737 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656433343764616139316338333265373039303263333636346162 Jan 21 00:59:58.544068 containerd[2473]: time="2026-01-21T00:59:58.544030882Z" level=info msg="StartContainer for \"cfed347daa91c832e70902c3664abc95438959a6d3acc254c0db169bc0743e63\" returns successfully" Jan 21 00:59:58.607000 audit: BPF prog-id=250 op=LOAD Jan 21 00:59:58.607000 audit[5714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffccece4f30 a2=94 a3=1 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.607000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.607000 audit: BPF prog-id=250 op=UNLOAD Jan 21 00:59:58.607000 audit[5714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffccece4f30 a2=94 a3=1 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.607000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.638000 audit: BPF prog-id=251 op=LOAD Jan 21 00:59:58.638000 audit[5714]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffccece4f20 a2=94 a3=4 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.638000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.638000 audit: BPF prog-id=251 op=UNLOAD Jan 21 00:59:58.638000 audit[5714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffccece4f20 a2=0 a3=4 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.638000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.639000 audit: BPF prog-id=252 op=LOAD Jan 21 00:59:58.639000 audit[5714]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffccece4d80 a2=94 a3=5 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.639000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.639000 audit: BPF prog-id=252 op=UNLOAD Jan 21 00:59:58.639000 audit[5714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffccece4d80 a2=0 a3=5 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.639000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.639000 audit: BPF prog-id=253 op=LOAD Jan 21 00:59:58.639000 audit[5714]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffccece4fa0 a2=94 a3=6 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.639000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.639000 audit: BPF prog-id=253 op=UNLOAD Jan 21 00:59:58.639000 audit[5714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffccece4fa0 a2=0 a3=6 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.639000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.640000 audit: BPF prog-id=254 op=LOAD Jan 21 00:59:58.640000 audit[5714]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffccece4750 a2=94 a3=88 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.640000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.641000 audit: BPF prog-id=255 op=LOAD Jan 21 00:59:58.641000 audit[5714]: SYSCALL arch=c000003e syscall=321 
success=yes exit=7 a0=5 a1=7ffccece45d0 a2=94 a3=2 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.641000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.641000 audit: BPF prog-id=255 op=UNLOAD Jan 21 00:59:58.641000 audit[5714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffccece4600 a2=0 a3=7ffccece4700 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.641000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.642000 audit: BPF prog-id=254 op=UNLOAD Jan 21 00:59:58.642000 audit[5714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=f769d10 a2=0 a3=99c8938601701db3 items=0 ppid=5610 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:59:58.654000 audit: BPF prog-id=256 op=LOAD Jan 21 00:59:58.654000 audit[5848]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee2065f70 a2=98 a3=1999999999999999 items=0 ppid=5610 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.654000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:59:58.655000 audit: BPF prog-id=256 op=UNLOAD Jan 21 00:59:58.655000 audit[5848]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffee2065f40 a3=0 items=0 ppid=5610 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.655000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:59:58.655000 audit: BPF prog-id=257 op=LOAD Jan 21 00:59:58.655000 audit[5848]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee2065e50 a2=94 a3=ffff items=0 ppid=5610 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.655000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:59:58.655000 audit: BPF prog-id=257 op=UNLOAD Jan 21 00:59:58.655000 audit[5848]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=7ffee2065e50 a2=94 a3=ffff items=0 ppid=5610 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.655000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:59:58.655000 audit: BPF prog-id=258 op=LOAD Jan 21 00:59:58.655000 audit[5848]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee2065e90 a2=94 a3=7ffee2066070 items=0 ppid=5610 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.655000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:59:58.655000 audit: BPF prog-id=258 op=UNLOAD Jan 21 00:59:58.655000 audit[5848]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffee2065e90 a2=94 a3=7ffee2066070 items=0 ppid=5610 pid=5848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.655000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:59:58.691704 containerd[2473]: time="2026-01-21T00:59:58.691226325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:58.693975 containerd[2473]: time="2026-01-21T00:59:58.693947932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 00:59:58.694154 containerd[2473]: time="2026-01-21T00:59:58.694059265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:58.694365 kubelet[3975]: E0121 00:59:58.694290 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:59:58.694365 kubelet[3975]: E0121 00:59:58.694326 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:59:58.694761 kubelet[3975]: E0121 00:59:58.694710 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxnf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bb965959-wpbx9_calico-apiserver(fa6a1068-061f-4c26-9e2c-97c6b3c762d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:58.695957 kubelet[3975]: E0121 00:59:58.695913 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 00:59:58.764000 audit: BPF prog-id=259 op=LOAD Jan 21 00:59:58.764000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6bb5b970 a2=98 a3=0 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.764000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=259 op=UNLOAD Jan 21 
00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6bb5b940 a3=0 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=260 op=LOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6bb5b780 a2=94 a3=54428f items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=260 op=UNLOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6bb5b780 a2=94 a3=54428f items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=261 op=LOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6bb5b7b0 a2=94 a3=2 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=261 op=UNLOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6bb5b7b0 a2=0 a3=2 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=262 op=LOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6bb5b560 a2=94 a3=4 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=262 op=UNLOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff6bb5b560 a2=94 a3=4 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=263 op=LOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6bb5b660 a2=94 a3=7fff6bb5b7e0 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=263 op=UNLOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff6bb5b660 a2=0 a3=7fff6bb5b7e0 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=264 op=LOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6bb5ad90 a2=94 a3=2 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=264 op=UNLOAD Jan 21 00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff6bb5ad90 a2=0 a3=2 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.765000 audit: BPF prog-id=265 op=LOAD Jan 21 
00:59:58.765000 audit[5870]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6bb5ae90 a2=94 a3=30 items=0 ppid=5610 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.765000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:59:58.779957 systemd-networkd[2103]: vxlan.calico: Link UP Jan 21 00:59:58.779964 systemd-networkd[2103]: vxlan.calico: Gained carrier Jan 21 00:59:58.787000 audit: BPF prog-id=266 op=LOAD Jan 21 00:59:58.787000 audit[5878]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd8430e80 a2=98 a3=0 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.787000 audit: BPF prog-id=266 op=UNLOAD Jan 21 00:59:58.787000 audit[5878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd8430e50 a3=0 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.788000 audit: BPF prog-id=267 op=LOAD Jan 21 00:59:58.788000 audit[5878]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd8430c70 a2=94 a3=54428f items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.788000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.788000 audit: BPF prog-id=267 op=UNLOAD Jan 21 00:59:58.788000 audit[5878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcd8430c70 a2=94 a3=54428f items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.788000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.788000 audit: BPF prog-id=268 op=LOAD Jan 21 00:59:58.788000 audit[5878]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd8430ca0 a2=94 a3=2 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.788000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.788000 audit: BPF prog-id=268 op=UNLOAD Jan 21 00:59:58.788000 audit[5878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcd8430ca0 a2=0 a3=2 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.788000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.811841 systemd-networkd[2103]: calia2ef8df133c: Gained IPv6LL Jan 21 00:59:58.880291 containerd[2473]: time="2026-01-21T00:59:58.880247463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-d2xs2,Uid:346360e9-6dd0-47dd-8091-663997b6e137,Namespace:calico-system,Attempt:0,}" Jan 21 00:59:58.968000 audit: BPF prog-id=269 op=LOAD Jan 21 00:59:58.968000 audit[5878]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd8430b60 a2=94 a3=1 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.968000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.968000 audit: BPF prog-id=269 op=UNLOAD Jan 21 00:59:58.968000 audit[5878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcd8430b60 a2=94 a3=1 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.968000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.983630 systemd-networkd[2103]: cali4e45ca519d6: Link UP Jan 21 00:59:58.984561 systemd-networkd[2103]: cali4e45ca519d6: Gained carrier Jan 21 00:59:58.989000 audit: BPF prog-id=270 op=LOAD Jan 21 00:59:58.989000 audit[5878]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcd8430b50 a2=94 a3=4 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.989000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.989000 audit: BPF prog-id=270 op=UNLOAD Jan 21 00:59:58.989000 audit[5878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcd8430b50 a2=0 a3=4 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.989000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.989000 audit: BPF prog-id=271 op=LOAD Jan 21 00:59:58.989000 audit[5878]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd84309b0 a2=94 a3=5 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.989000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.989000 audit: BPF prog-id=271 op=UNLOAD Jan 21 00:59:58.989000 audit[5878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcd84309b0 a2=0 a3=5 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.989000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.989000 audit: BPF prog-id=272 op=LOAD Jan 21 00:59:58.989000 audit[5878]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcd8430bd0 a2=94 a3=6 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.989000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.989000 audit: BPF prog-id=272 op=UNLOAD Jan 21 00:59:58.989000 audit[5878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcd8430bd0 a2=0 a3=6 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.989000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.991000 audit: BPF prog-id=273 op=LOAD Jan 21 00:59:58.991000 audit[5878]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcd8430380 a2=94 a3=88 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.991000 audit: BPF prog-id=274 op=LOAD Jan 21 00:59:58.991000 audit[5878]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcd8430200 a2=94 a3=2 items=0 ppid=5610 pid=5878 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.991000 audit: BPF prog-id=274 op=UNLOAD Jan 21 00:59:58.991000 audit[5878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcd8430230 a2=0 a3=7ffcd8430330 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.991000 audit: BPF prog-id=273 op=UNLOAD Jan 21 00:59:58.991000 audit[5878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2c4b8d10 a2=0 a3=d5362e0abd6aa085 items=0 ppid=5610 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:59:58.996000 audit: BPF prog-id=265 op=UNLOAD Jan 21 00:59:58.996000 audit[5610]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000402240 a2=0 a3=0 items=0 ppid=5057 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:58.996000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.922 [INFO][5881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0 goldmane-666569f655- calico-system 346360e9-6dd0-47dd-8091-663997b6e137 855 0 2026-01-21 00:59:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-n-ed178c4493 goldmane-666569f655-d2xs2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4e45ca519d6 [] [] }} ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Namespace="calico-system" Pod="goldmane-666569f655-d2xs2" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.922 [INFO][5881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Namespace="calico-system" Pod="goldmane-666569f655-d2xs2" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.945 [INFO][5893] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" HandleID="k8s-pod-network.1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Workload="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.946 [INFO][5893] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" HandleID="k8s-pod-network.1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Workload="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-n-ed178c4493", "pod":"goldmane-666569f655-d2xs2", "timestamp":"2026-01-21 00:59:58.945909986 +0000 UTC"}, Hostname:"ci-4547.0.0-n-ed178c4493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.946 [INFO][5893] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.946 [INFO][5893] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.946 [INFO][5893] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-n-ed178c4493' Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.952 [INFO][5893] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.955 [INFO][5893] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.958 [INFO][5893] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.960 [INFO][5893] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.962 [INFO][5893] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.962 [INFO][5893] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.964 [INFO][5893] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8 Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.971 [INFO][5893] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.980 [INFO][5893] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.136/26] block=192.168.70.128/26 
handle="k8s-pod-network.1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.980 [INFO][5893] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.136/26] handle="k8s-pod-network.1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" host="ci-4547.0.0-n-ed178c4493" Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.980 [INFO][5893] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 00:59:59.003895 containerd[2473]: 2026-01-21 00:59:58.980 [INFO][5893] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.136/26] IPv6=[] ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" HandleID="k8s-pod-network.1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Workload="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" Jan 21 00:59:59.004526 containerd[2473]: 2026-01-21 00:59:58.981 [INFO][5881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Namespace="calico-system" Pod="goldmane-666569f655-d2xs2" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"346360e9-6dd0-47dd-8091-663997b6e137", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"", Pod:"goldmane-666569f655-d2xs2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e45ca519d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:59.004526 containerd[2473]: 2026-01-21 00:59:58.981 [INFO][5881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.136/32] ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Namespace="calico-system" Pod="goldmane-666569f655-d2xs2" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" Jan 21 00:59:59.004526 containerd[2473]: 2026-01-21 00:59:58.981 [INFO][5881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e45ca519d6 ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Namespace="calico-system" Pod="goldmane-666569f655-d2xs2" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" Jan 21 00:59:59.004526 containerd[2473]: 2026-01-21 00:59:58.983 [INFO][5881] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Namespace="calico-system" Pod="goldmane-666569f655-d2xs2" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" Jan 21 00:59:59.004526 containerd[2473]: 2026-01-21 00:59:58.983 [INFO][5881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Namespace="calico-system" Pod="goldmane-666569f655-d2xs2" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"346360e9-6dd0-47dd-8091-663997b6e137", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 59, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-n-ed178c4493", ContainerID:"1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8", Pod:"goldmane-666569f655-d2xs2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e45ca519d6", MAC:"c2:94:75:70:17:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:59:59.004526 containerd[2473]: 2026-01-21 00:59:59.002 [INFO][5881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" Namespace="calico-system" Pod="goldmane-666569f655-d2xs2" WorkloadEndpoint="ci--4547.0.0--n--ed178c4493-k8s-goldmane--666569f655--d2xs2-eth0" Jan 21 00:59:59.046283 containerd[2473]: time="2026-01-21T00:59:59.046213926Z" level=info msg="connecting to shim 1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8" address="unix:///run/containerd/s/9313e200da9bbff7d4764a8f33f59b48094d66a17eb702c2f2b7c4dc17116aca" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:59:59.058976 kubelet[3975]: E0121 00:59:59.058942 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 00:59:59.073268 systemd[1]: Started 
cri-containerd-1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8.scope - libcontainer container 1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8. Jan 21 00:59:59.079789 kubelet[3975]: E0121 00:59:59.077696 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 00:59:59.101000 audit: BPF prog-id=275 op=LOAD Jan 21 00:59:59.102000 audit: BPF prog-id=276 op=LOAD Jan 21 00:59:59.102000 audit[5943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5932 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165326262376661306235353631333966373261346630396261323730 Jan 21 00:59:59.102000 audit: BPF prog-id=276 op=UNLOAD Jan 21 00:59:59.102000 audit[5943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5932 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165326262376661306235353631333966373261346630396261323730 Jan 21 00:59:59.102000 audit: BPF prog-id=277 op=LOAD Jan 21 00:59:59.102000 audit[5943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5932 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165326262376661306235353631333966373261346630396261323730 Jan 21 00:59:59.103000 audit: BPF prog-id=278 op=LOAD Jan 21 00:59:59.103000 audit[5943]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5932 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165326262376661306235353631333966373261346630396261323730 Jan 21 
00:59:59.103000 audit: BPF prog-id=278 op=UNLOAD Jan 21 00:59:59.103000 audit[5943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5932 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165326262376661306235353631333966373261346630396261323730 Jan 21 00:59:59.103000 audit: BPF prog-id=277 op=UNLOAD Jan 21 00:59:59.103000 audit[5943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5932 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165326262376661306235353631333966373261346630396261323730 Jan 21 00:59:59.103000 audit: BPF prog-id=279 op=LOAD Jan 21 00:59:59.103000 audit[5943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5932 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165326262376661306235353631333966373261346630396261323730 Jan 21 00:59:59.140000 audit[5965]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:59.140000 audit[5965]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd52751170 a2=0 a3=7ffd5275115c items=0 ppid=4132 pid=5965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:59.144000 audit[5965]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:59:59.144000 audit[5965]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd52751170 a2=0 a3=0 items=0 ppid=4132 pid=5965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.144000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:59:59.148436 kubelet[3975]: I0121 00:59:59.148395 3975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-674b8bbfcf-rptg9" podStartSLOduration=42.148383021 podStartE2EDuration="42.148383021s" podCreationTimestamp="2026-01-21 00:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:59:59.147887475 +0000 UTC m=+47.360616045" watchObservedRunningTime="2026-01-21 00:59:59.148383021 +0000 UTC m=+47.361111605" Jan 21 00:59:59.175696 containerd[2473]: time="2026-01-21T00:59:59.175662311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-d2xs2,Uid:346360e9-6dd0-47dd-8091-663997b6e137,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e2bb7fa0b556139f72a4f09ba270d8e8b1e0854b4b176db2711b300ed75ddd8\"" Jan 21 00:59:59.177021 containerd[2473]: time="2026-01-21T00:59:59.176849582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 00:59:59.193000 audit[5980]: NETFILTER_CFG table=nat:128 family=2 entries=15 op=nft_register_chain pid=5980 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:59:59.193000 audit[5980]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc7be74d30 a2=0 a3=7ffc7be74d1c items=0 ppid=5610 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.193000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:59:59.194000 audit[5981]: NETFILTER_CFG table=mangle:129 family=2 entries=16 op=nft_register_chain pid=5981 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:59:59.194000 audit[5981]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe7682f180 a2=0 a3=7ffe7682f16c items=0 ppid=5610 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.194000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:59:59.228000 audit[5979]: NETFILTER_CFG table=raw:130 family=2 entries=21 op=nft_register_chain pid=5979 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:59:59.228000 audit[5979]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffdcd65a00 a2=0 a3=7fffdcd659ec items=0 ppid=5610 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.228000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:59:59.230000 audit[5984]: NETFILTER_CFG table=filter:131 family=2 entries=285 op=nft_register_chain pid=5984 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:59:59.230000 audit[5984]: SYSCALL arch=c000003e syscall=46 success=yes exit=168336 a0=3 a1=7ffe7cd82100 a2=0 a3=55b4fc867000 items=0 ppid=5610 pid=5984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.230000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:59:59.304000 audit[5994]: NETFILTER_CFG table=filter:132 family=2 entries=64 op=nft_register_chain pid=5994 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:59:59.304000 audit[5994]: SYSCALL arch=c000003e syscall=46 success=yes exit=31104 a0=3 a1=7ffe5fd67b80 a2=0 a3=7ffe5fd67b6c items=0 ppid=5610 pid=5994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:59:59.304000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:59:59.387967 systemd-networkd[2103]: califd5f2e65518: Gained IPv6LL Jan 21 00:59:59.419804 containerd[2473]: time="2026-01-21T00:59:59.419713737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:59.424438 containerd[2473]: time="2026-01-21T00:59:59.424400316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 00:59:59.424511 containerd[2473]: time="2026-01-21T00:59:59.424471956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:59.424648 kubelet[3975]: E0121 00:59:59.424617 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 00:59:59.424702 kubelet[3975]: E0121 00:59:59.424661 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 00:59:59.425068 kubelet[3975]: E0121 00:59:59.424823 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqt6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-d2xs2_calico-system(346360e9-6dd0-47dd-8091-663997b6e137): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:59.426214 kubelet[3975]: E0121 00:59:59.426163 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 00:59:59.857441 kubelet[3975]: I0121 00:59:59.857397 3975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 
00:59:59.899998 systemd-networkd[2103]: calic99135bd72c: Gained IPv6LL Jan 21 01:00:00.071082 kubelet[3975]: E0121 01:00:00.070728 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:00:00.071680 kubelet[3975]: E0121 01:00:00.071647 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:00:00.127000 audit[6048]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=6048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:00:00.127000 audit[6048]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5e02c4d0 a2=0 a3=7fff5e02c4bc items=0 ppid=4132 pid=6048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:00.127000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:00:00.159000 audit[6048]: NETFILTER_CFG table=nat:134 family=2 entries=47 op=nft_register_chain pid=6048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:00:00.159000 audit[6048]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff5e02c4d0 a2=0 a3=7fff5e02c4bc items=0 ppid=4132 pid=6048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:00.159000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:00:00.412118 systemd-networkd[2103]: cali4e45ca519d6: Gained IPv6LL Jan 21 01:00:00.795990 systemd-networkd[2103]: vxlan.calico: Gained IPv6LL Jan 21 01:00:01.074914 kubelet[3975]: E0121 01:00:01.074803 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:00:06.881216 containerd[2473]: time="2026-01-21T01:00:06.880921249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 01:00:07.650172 containerd[2473]: time="2026-01-21T01:00:07.650122621Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:07.652478 containerd[2473]: time="2026-01-21T01:00:07.652452874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 01:00:07.652551 containerd[2473]: time="2026-01-21T01:00:07.652527802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:07.652691 kubelet[3975]: E0121 01:00:07.652661 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:00:07.653025 kubelet[3975]: E0121 01:00:07.652709 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:00:07.653025 kubelet[3975]: E0121 01:00:07.652853 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:057c1f4bca9b49eb960e9b36fddec5b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hj4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f69864b5-f6kfk_calico-system(583e9439-b173-47b8-8158-974665ab3f14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:07.655390 containerd[2473]: time="2026-01-21T01:00:07.655362082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 01:00:08.418960 containerd[2473]: time="2026-01-21T01:00:08.418920411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:08.421901 
containerd[2473]: time="2026-01-21T01:00:08.421871918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 01:00:08.422007 containerd[2473]: time="2026-01-21T01:00:08.421876000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:08.422103 kubelet[3975]: E0121 01:00:08.422061 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:00:08.422151 kubelet[3975]: E0121 01:00:08.422117 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:00:08.422257 kubelet[3975]: E0121 01:00:08.422233 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hj4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f69864b5-f6kfk_calico-system(583e9439-b173-47b8-8158-974665ab3f14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Jan 21 01:00:08.423633 kubelet[3975]: E0121 01:00:08.423569 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 01:00:09.881629 containerd[2473]: time="2026-01-21T01:00:09.881587090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 01:00:10.562917 containerd[2473]: time="2026-01-21T01:00:10.562874803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:10.566546 containerd[2473]: time="2026-01-21T01:00:10.566519690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 01:00:10.566629 containerd[2473]: time="2026-01-21T01:00:10.566568023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:10.566756 kubelet[3975]: E0121 01:00:10.566719 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:00:10.567266 kubelet[3975]: E0121 01:00:10.566790 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:00:10.567297 containerd[2473]: time="2026-01-21T01:00:10.567164307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:00:10.567448 kubelet[3975]: E0121 01:00:10.567025 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbwbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:11.271927 containerd[2473]: time="2026-01-21T01:00:11.271869188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:11.274316 containerd[2473]: time="2026-01-21T01:00:11.274285362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:00:11.274423 containerd[2473]: time="2026-01-21T01:00:11.274357572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:11.274567 kubelet[3975]: E0121 01:00:11.274535 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:11.274624 kubelet[3975]: E0121 01:00:11.274577 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:11.275063 containerd[2473]: time="2026-01-21T01:00:11.274870685Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 01:00:11.275135 kubelet[3975]: E0121 01:00:11.275036 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9bxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bb965959-6mvxn_calico-apiserver(52de65b6-e239-41f1-ad3a-143641236290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:11.276427 kubelet[3975]: E0121 01:00:11.276393 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 01:00:11.902289 containerd[2473]: time="2026-01-21T01:00:11.902241419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:11.904550 containerd[2473]: time="2026-01-21T01:00:11.904523471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 01:00:11.904588 containerd[2473]: time="2026-01-21T01:00:11.904536045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:11.904693 kubelet[3975]: E0121 01:00:11.904664 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:00:11.904959 kubelet[3975]: E0121 01:00:11.904707 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:00:11.904983 kubelet[3975]: E0121 01:00:11.904949 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbwbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:11.905407 containerd[2473]: time="2026-01-21T01:00:11.905230849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 01:00:11.906906 
kubelet[3975]: E0121 01:00:11.906852 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:00:12.739623 containerd[2473]: time="2026-01-21T01:00:12.739574116Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:12.741796 containerd[2473]: time="2026-01-21T01:00:12.741754624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 01:00:12.741857 containerd[2473]: time="2026-01-21T01:00:12.741849169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:12.742040 kubelet[3975]: E0121 01:00:12.742004 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:00:12.742105 kubelet[3975]: E0121 01:00:12.742061 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:00:12.742309 kubelet[3975]: E0121 01:00:12.742272 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqt6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-d2xs2_calico-system(346360e9-6dd0-47dd-8091-663997b6e137): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:12.743080 containerd[2473]: time="2026-01-21T01:00:12.742838577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:00:12.743416 kubelet[3975]: E0121 01:00:12.743392 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 
21 01:00:13.879057 containerd[2473]: time="2026-01-21T01:00:13.879006185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:13.881790 containerd[2473]: time="2026-01-21T01:00:13.881699365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:00:13.881912 containerd[2473]: time="2026-01-21T01:00:13.881765876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:13.882143 kubelet[3975]: E0121 01:00:13.882109 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:13.882397 kubelet[3975]: E0121 01:00:13.882148 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:13.882397 kubelet[3975]: E0121 01:00:13.882339 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxnf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-58bb965959-wpbx9_calico-apiserver(fa6a1068-061f-4c26-9e2c-97c6b3c762d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:13.882967 containerd[2473]: time="2026-01-21T01:00:13.882939056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 01:00:13.884376 kubelet[3975]: E0121 01:00:13.884341 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:00:14.627565 containerd[2473]: time="2026-01-21T01:00:14.627519745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:14.631351 containerd[2473]: time="2026-01-21T01:00:14.631321773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 01:00:14.631471 containerd[2473]: time="2026-01-21T01:00:14.631324816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:14.631553 kubelet[3975]: E0121 01:00:14.631509 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:00:14.631602 kubelet[3975]: E0121 01:00:14.631567 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:00:14.631743 kubelet[3975]: E0121 01:00:14.631697 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wk6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-588547dc94-gdj8l_calico-system(f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:14.632888 kubelet[3975]: E0121 01:00:14.632855 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:00:19.882731 kubelet[3975]: E0121 01:00:19.882681 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 01:00:21.884098 kubelet[3975]: E0121 01:00:21.883074 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 01:00:25.882216 kubelet[3975]: E0121 01:00:25.881856 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:00:26.881817 kubelet[3975]: E0121 01:00:26.881742 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:00:27.882249 kubelet[3975]: E0121 01:00:27.882145 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:00:29.885723 kubelet[3975]: E0121 01:00:29.885015 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:00:34.883799 containerd[2473]: time="2026-01-21T01:00:34.883596436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 01:00:35.134378 containerd[2473]: time="2026-01-21T01:00:35.134241516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:35.137122 containerd[2473]: time="2026-01-21T01:00:35.137060472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 01:00:35.137412 containerd[2473]: time="2026-01-21T01:00:35.137206124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:35.137558 kubelet[3975]: E0121 01:00:35.137466 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:00:35.138080 kubelet[3975]: E0121 01:00:35.137608 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:00:35.138359 kubelet[3975]: E0121 01:00:35.138304 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:057c1f4bca9b49eb960e9b36fddec5b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hj4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f69864b5-f6kfk_calico-system(583e9439-b173-47b8-8158-974665ab3f14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:35.140713 containerd[2473]: time="2026-01-21T01:00:35.140632267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 01:00:35.422485 containerd[2473]: time="2026-01-21T01:00:35.422116870Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:35.427647 containerd[2473]: time="2026-01-21T01:00:35.427601379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 01:00:35.427728 containerd[2473]: time="2026-01-21T01:00:35.427677770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:35.427844 kubelet[3975]: E0121 01:00:35.427811 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:00:35.427909 kubelet[3975]: E0121 01:00:35.427860 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:00:35.428035 kubelet[3975]: E0121 01:00:35.427996 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hj4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f69864b5-f6kfk_calico-system(583e9439-b173-47b8-8158-974665ab3f14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:35.429388 kubelet[3975]: E0121 01:00:35.429319 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 01:00:36.881707 containerd[2473]: time="2026-01-21T01:00:36.881657594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:00:37.147477 containerd[2473]: time="2026-01-21T01:00:37.147366466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:37.149682 containerd[2473]: time="2026-01-21T01:00:37.149648221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
21 01:00:37.149804 containerd[2473]: time="2026-01-21T01:00:37.149705005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:37.149851 kubelet[3975]: E0121 01:00:37.149818 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:37.150167 kubelet[3975]: E0121 01:00:37.149861 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:37.150167 kubelet[3975]: E0121 01:00:37.149999 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9bxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bb965959-6mvxn_calico-apiserver(52de65b6-e239-41f1-ad3a-143641236290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:37.151459 kubelet[3975]: E0121 01:00:37.151414 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 01:00:38.884484 containerd[2473]: time="2026-01-21T01:00:38.883181694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:00:39.136576 containerd[2473]: time="2026-01-21T01:00:39.136441198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:39.138784 containerd[2473]: time="2026-01-21T01:00:39.138733942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:00:39.138865 containerd[2473]: time="2026-01-21T01:00:39.138745592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:39.138964 kubelet[3975]: E0121 01:00:39.138926 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:39.139333 kubelet[3975]: E0121 01:00:39.138972 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:39.139361 containerd[2473]: time="2026-01-21T01:00:39.139341266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 01:00:39.139592 kubelet[3975]: E0121 01:00:39.139465 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxnf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bb965959-wpbx9_calico-apiserver(fa6a1068-061f-4c26-9e2c-97c6b3c762d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:39.140727 kubelet[3975]: E0121 01:00:39.140695 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:00:39.406828 containerd[2473]: time="2026-01-21T01:00:39.406589236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:39.409048 containerd[2473]: time="2026-01-21T01:00:39.408959281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 01:00:39.410503 containerd[2473]: time="2026-01-21T01:00:39.409789484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:39.410598 kubelet[3975]: E0121 01:00:39.409969 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:00:39.410598 kubelet[3975]: E0121 01:00:39.410014 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:00:39.410598 kubelet[3975]: E0121 01:00:39.410158 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbwbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:39.413388 containerd[2473]: time="2026-01-21T01:00:39.413200591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 01:00:39.651650 containerd[2473]: time="2026-01-21T01:00:39.651607164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:39.655370 containerd[2473]: time="2026-01-21T01:00:39.655322059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 01:00:39.655468 containerd[2473]: time="2026-01-21T01:00:39.655425455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:39.655607 kubelet[3975]: E0121 01:00:39.655573 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:00:39.655660 kubelet[3975]: E0121 01:00:39.655624 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:00:39.655812 kubelet[3975]: E0121 01:00:39.655754 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbwbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:39.657342 kubelet[3975]: E0121 01:00:39.657246 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:00:39.884440 containerd[2473]: time="2026-01-21T01:00:39.884395625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 01:00:40.147046 containerd[2473]: time="2026-01-21T01:00:40.146992699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 
01:00:40.150540 containerd[2473]: time="2026-01-21T01:00:40.150495554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 01:00:40.150652 containerd[2473]: time="2026-01-21T01:00:40.150561884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:40.150753 kubelet[3975]: E0121 01:00:40.150683 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:00:40.151006 kubelet[3975]: E0121 01:00:40.150764 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:00:40.151678 kubelet[3975]: E0121 01:00:40.151249 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqt6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-d2xs2_calico-system(346360e9-6dd0-47dd-8091-663997b6e137): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:40.152681 kubelet[3975]: E0121 01:00:40.152633 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:00:43.883001 containerd[2473]: time="2026-01-21T01:00:43.882957268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 01:00:44.151543 containerd[2473]: time="2026-01-21T01:00:44.151418640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:44.153963 containerd[2473]: time="2026-01-21T01:00:44.153934010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 01:00:44.154081 containerd[2473]: time="2026-01-21T01:00:44.153991948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:44.154126 kubelet[3975]: E0121 01:00:44.154074 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:00:44.154440 kubelet[3975]: E0121 01:00:44.154131 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:00:44.154638 kubelet[3975]: E0121 01:00:44.154585 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wk6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-588547dc94-gdj8l_calico-system(f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:44.156146 kubelet[3975]: E0121 01:00:44.155747 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:00:47.882385 kubelet[3975]: E0121 01:00:47.881646 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 01:00:47.884392 kubelet[3975]: E0121 01:00:47.884328 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 01:00:50.883289 kubelet[3975]: E0121 01:00:50.883090 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:00:50.883289 kubelet[3975]: E0121 01:00:50.883231 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:00:54.881541 kubelet[3975]: E0121 01:00:54.881480 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:00:55.884576 kubelet[3975]: E0121 01:00:55.883985 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:00:56.113405 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:35586.service - OpenSSH per-connection server daemon (10.200.16.10:35586). Jan 21 01:00:56.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.39:22-10.200.16.10:35586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:56.114284 kernel: kauditd_printk_skb: 391 callbacks suppressed Jan 21 01:00:56.114326 kernel: audit: type=1130 audit(1768957256.112:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.39:22-10.200.16.10:35586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:56.704000 audit[6126]: USER_ACCT pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:56.709681 sshd-session[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:56.710628 sshd[6126]: Accepted publickey for core from 10.200.16.10 port 35586 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:00:56.711110 kernel: audit: type=1101 audit(1768957256.704:760): pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:56.711159 kernel: audit: type=1103 audit(1768957256.707:761): pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:56.707000 audit[6126]: CRED_ACQ pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:56.716190 systemd-logind[2449]: New session 11 of user core. 
Jan 21 01:00:56.719659 kernel: audit: type=1006 audit(1768957256.707:762): pid=6126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 21 01:00:56.725802 kernel: audit: type=1300 audit(1768957256.707:762): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc68f6bf60 a2=3 a3=0 items=0 ppid=1 pid=6126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:56.707000 audit[6126]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc68f6bf60 a2=3 a3=0 items=0 ppid=1 pid=6126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:56.727912 kernel: audit: type=1327 audit(1768957256.707:762): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:56.707000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:56.728187 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 21 01:00:56.731000 audit[6126]: USER_START pid=6126 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:56.739812 kernel: audit: type=1105 audit(1768957256.731:763): pid=6126 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:56.739000 audit[6130]: CRED_ACQ pid=6130 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:56.744848 kernel: audit: type=1103 audit(1768957256.739:764): pid=6130 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:57.099356 sshd[6130]: Connection closed by 10.200.16.10 port 35586 Jan 21 01:00:57.099943 sshd-session[6126]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:57.101000 audit[6126]: USER_END pid=6126 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:57.104921 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:35586.service: Deactivated successfully. Jan 21 01:00:57.107514 systemd[1]: session-11.scope: Deactivated successfully. Jan 21 01:00:57.110724 systemd-logind[2449]: Session 11 logged out. Waiting for processes to exit. 
Jan 21 01:00:57.101000 audit[6126]: CRED_DISP pid=6126 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:57.111758 systemd-logind[2449]: Removed session 11. Jan 21 01:00:57.115059 kernel: audit: type=1106 audit(1768957257.101:765): pid=6126 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:57.115137 kernel: audit: type=1104 audit(1768957257.101:766): pid=6126 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:00:57.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.39:22-10.200.16.10:35586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:01.887354 kubelet[3975]: E0121 01:01:01.887309 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:01:01.887940 kubelet[3975]: E0121 01:01:01.887408 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 01:01:02.231305 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:02.231421 kernel: audit: type=1130 audit(1768957262.228:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.39:22-10.200.16.10:45630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:02.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.39:22-10.200.16.10:45630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:01:02.229056 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:45630.service - OpenSSH per-connection server daemon (10.200.16.10:45630). Jan 21 01:01:02.858000 audit[6168]: USER_ACCT pid=6168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:02.863793 sshd[6168]: Accepted publickey for core from 10.200.16.10 port 45630 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:02.865000 audit[6168]: CRED_ACQ pid=6168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:02.867698 sshd-session[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:02.871938 kernel: audit: type=1101 audit(1768957262.858:769): pid=6168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:02.872017 kernel: audit: type=1103 audit(1768957262.865:770): pid=6168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:02.886423 kernel: audit: type=1006 audit(1768957262.865:771): pid=6168 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 21 01:01:02.886495 kernel: audit: type=1300 audit(1768957262.865:771): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf43be560 a2=3 a3=0 items=0 ppid=1 pid=6168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:02.865000 audit[6168]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf43be560 a2=3 a3=0 items=0 ppid=1 pid=6168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:02.878858 systemd-logind[2449]: New session 12 of user core. 
Jan 21 01:01:02.887626 kubelet[3975]: E0121 01:01:02.887020 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:01:02.865000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:02.890821 kernel: audit: type=1327 audit(1768957262.865:771): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:02.890860 kubelet[3975]: E0121 01:01:02.888931 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 01:01:02.888088 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 21 01:01:02.893000 audit[6168]: USER_START pid=6168 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:02.903750 kernel: audit: type=1105 audit(1768957262.893:772): pid=6168 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:02.903944 kernel: audit: type=1103 audit(1768957262.893:773): pid=6172 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:02.893000 audit[6172]: CRED_ACQ pid=6172 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:03.268574 sshd[6172]: Connection closed by 10.200.16.10 port 45630 Jan 21 01:01:03.269692 sshd-session[6168]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:03.279879 kernel: audit: type=1106 audit(1768957263.270:774): pid=6168 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:03.270000 audit[6168]: USER_END pid=6168 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:03.275730 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:45630.service: Deactivated successfully. Jan 21 01:01:03.270000 audit[6168]: CRED_DISP pid=6168 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:03.284796 kernel: audit: type=1104 audit(1768957263.270:775): pid=6168 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:03.283708 systemd[1]: session-12.scope: Deactivated successfully. Jan 21 01:01:03.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.39:22-10.200.16.10:45630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:03.285510 systemd-logind[2449]: Session 12 logged out. Waiting for processes to exit. Jan 21 01:01:03.287712 systemd-logind[2449]: Removed session 12. 
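Annotation (not part of the captured log): once the pulls fail, the kubelet keeps re-queuing them, which is why the same "Back-off pulling image ..." messages reappear between the SSH sessions above at growing intervals. A minimal sketch of that kind of exponential backoff is below; the 10-second initial delay and 5-minute cap are assumed, commonly cited kubelet defaults, not values stated anywhere in this log.

# Minimal sketch of an exponential image-pull backoff schedule.
# The 10 s start and 300 s cap are assumptions, not values taken from this log.
def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8) -> list[float]:
    delay, schedule = initial, []
    for _ in range(attempts):
        schedule.append(delay)
        delay = min(delay * 2, cap)
    return schedule

print(backoff_schedule())  # 10, 20, 40, 80, 160, then pinned at the 300 s cap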
Jan 21 01:01:08.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.39:22-10.200.16.10:45632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:08.404082 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:45632.service - OpenSSH per-connection server daemon (10.200.16.10:45632). Jan 21 01:01:08.411819 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:08.411917 kernel: audit: type=1130 audit(1768957268.403:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.39:22-10.200.16.10:45632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:09.018000 audit[6185]: USER_ACCT pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.020376 sshd[6185]: Accepted publickey for core from 10.200.16.10 port 45632 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:09.023754 sshd-session[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:09.024823 kernel: audit: type=1101 audit(1768957269.018:778): pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.022000 audit[6185]: CRED_ACQ pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.032870 kernel: audit: type=1103 audit(1768957269.022:779): pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.034843 systemd-logind[2449]: New session 13 of user core. 
Jan 21 01:01:09.041392 kernel: audit: type=1006 audit(1768957269.022:780): pid=6185 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 21 01:01:09.041464 kernel: audit: type=1300 audit(1768957269.022:780): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb73da760 a2=3 a3=0 items=0 ppid=1 pid=6185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:09.022000 audit[6185]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb73da760 a2=3 a3=0 items=0 ppid=1 pid=6185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:09.043559 kernel: audit: type=1327 audit(1768957269.022:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:09.022000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:09.045998 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 21 01:01:09.047000 audit[6185]: USER_START pid=6185 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.053000 audit[6189]: CRED_ACQ pid=6189 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.057847 kernel: audit: type=1105 audit(1768957269.047:781): pid=6185 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.057926 kernel: audit: type=1103 audit(1768957269.053:782): pid=6189 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.439377 sshd[6189]: Connection closed by 10.200.16.10 port 45632 Jan 21 01:01:09.440933 sshd-session[6185]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:09.441000 audit[6185]: USER_END pid=6185 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.445212 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:45632.service: Deactivated successfully. Jan 21 01:01:09.448017 systemd[1]: session-13.scope: Deactivated successfully. Jan 21 01:01:09.451197 systemd-logind[2449]: Session 13 logged out. Waiting for processes to exit. 
Jan 21 01:01:09.451936 kernel: audit: type=1106 audit(1768957269.441:783): pid=6185 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.441000 audit[6185]: CRED_DISP pid=6185 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.452728 systemd-logind[2449]: Removed session 13. Jan 21 01:01:09.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.39:22-10.200.16.10:45632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:09.458792 kernel: audit: type=1104 audit(1768957269.441:784): pid=6185 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:09.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.39:22-10.200.16.10:49624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:09.562197 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:49624.service - OpenSSH per-connection server daemon (10.200.16.10:49624). Jan 21 01:01:09.884885 kubelet[3975]: E0121 01:01:09.884067 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:01:10.157000 audit[6202]: USER_ACCT pid=6202 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:10.161793 sshd[6202]: Accepted publickey for core from 10.200.16.10 port 49624 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:10.161000 audit[6202]: CRED_ACQ pid=6202 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:10.162000 audit[6202]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3edf0ae0 a2=3 a3=0 items=0 ppid=1 pid=6202 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:10.162000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:10.163715 sshd-session[6202]: pam_unix(sshd:session): session 
opened for user core(uid=500) by core(uid=0) Jan 21 01:01:10.173071 systemd-logind[2449]: New session 14 of user core. Jan 21 01:01:10.176967 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 21 01:01:10.180000 audit[6202]: USER_START pid=6202 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:10.182000 audit[6206]: CRED_ACQ pid=6206 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:10.589892 sshd[6206]: Connection closed by 10.200.16.10 port 49624 Jan 21 01:01:10.591400 sshd-session[6202]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:10.591000 audit[6202]: USER_END pid=6202 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:10.591000 audit[6202]: CRED_DISP pid=6202 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:10.595412 systemd-logind[2449]: Session 14 logged out. Waiting for processes to exit. Jan 21 01:01:10.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.39:22-10.200.16.10:49624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:10.595570 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:49624.service: Deactivated successfully. Jan 21 01:01:10.597583 systemd[1]: session-14.scope: Deactivated successfully. Jan 21 01:01:10.599682 systemd-logind[2449]: Removed session 14. Jan 21 01:01:10.716129 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:49630.service - OpenSSH per-connection server daemon (10.200.16.10:49630). Jan 21 01:01:10.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.39:22-10.200.16.10:49630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:01:10.881006 kubelet[3975]: E0121 01:01:10.880883 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:01:11.311000 audit[6216]: USER_ACCT pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:11.312323 sshd[6216]: Accepted publickey for core from 10.200.16.10 port 49630 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:11.313000 audit[6216]: CRED_ACQ pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:11.313000 audit[6216]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd68549640 a2=3 a3=0 items=0 ppid=1 pid=6216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:11.313000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:11.317741 sshd-session[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:11.324670 systemd-logind[2449]: New session 15 of user core. Jan 21 01:01:11.332954 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 21 01:01:11.336000 audit[6216]: USER_START pid=6216 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:11.337000 audit[6220]: CRED_ACQ pid=6220 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:11.727820 sshd[6220]: Connection closed by 10.200.16.10 port 49630 Jan 21 01:01:11.728987 sshd-session[6216]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:11.729000 audit[6216]: USER_END pid=6216 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:11.730000 audit[6216]: CRED_DISP pid=6216 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:11.734289 systemd-logind[2449]: Session 15 logged out. Waiting for processes to exit. Jan 21 01:01:11.734842 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:49630.service: Deactivated successfully. Jan 21 01:01:11.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.39:22-10.200.16.10:49630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:11.738987 systemd[1]: session-15.scope: Deactivated successfully. Jan 21 01:01:11.745253 systemd-logind[2449]: Removed session 15. 
Jan 21 01:01:14.882935 kubelet[3975]: E0121 01:01:14.882892 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 01:01:14.883532 kubelet[3975]: E0121 01:01:14.883501 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:01:15.881277 containerd[2473]: time="2026-01-21T01:01:15.881233245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 01:01:16.162032 containerd[2473]: time="2026-01-21T01:01:16.161918522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:16.164351 containerd[2473]: time="2026-01-21T01:01:16.164320162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 01:01:16.164423 containerd[2473]: time="2026-01-21T01:01:16.164341849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:16.164540 kubelet[3975]: E0121 01:01:16.164511 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:01:16.164974 kubelet[3975]: E0121 01:01:16.164549 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:01:16.164974 kubelet[3975]: E0121 01:01:16.164677 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:057c1f4bca9b49eb960e9b36fddec5b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hj4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f69864b5-f6kfk_calico-system(583e9439-b173-47b8-8158-974665ab3f14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:16.166973 containerd[2473]: time="2026-01-21T01:01:16.166945867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 01:01:16.448080 containerd[2473]: time="2026-01-21T01:01:16.447865503Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:16.450417 containerd[2473]: time="2026-01-21T01:01:16.450209744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 01:01:16.450417 containerd[2473]: time="2026-01-21T01:01:16.450301077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:16.450541 kubelet[3975]: E0121 01:01:16.450441 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:01:16.450541 kubelet[3975]: E0121 01:01:16.450491 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:01:16.450693 kubelet[3975]: E0121 01:01:16.450622 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hj4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f69864b5-f6kfk_calico-system(583e9439-b173-47b8-8158-974665ab3f14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:16.452264 kubelet[3975]: E0121 01:01:16.452224 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 01:01:16.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.39:22-10.200.16.10:49646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:16.851037 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:49646.service - OpenSSH per-connection server daemon (10.200.16.10:49646). 
Jan 21 01:01:16.862805 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 21 01:01:16.862885 kernel: audit: type=1130 audit(1768957276.850:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.39:22-10.200.16.10:49646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:16.881347 kubelet[3975]: E0121 01:01:16.881309 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:01:17.462000 audit[6238]: USER_ACCT pid=6238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.463524 sshd[6238]: Accepted publickey for core from 10.200.16.10 port 49646 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:17.467945 sshd-session[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:17.473179 kernel: audit: type=1101 audit(1768957277.462:805): pid=6238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.473245 kernel: audit: type=1103 audit(1768957277.466:806): pid=6238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.466000 audit[6238]: CRED_ACQ pid=6238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.476139 kernel: audit: type=1006 audit(1768957277.466:807): pid=6238 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 21 01:01:17.476855 kernel: audit: type=1300 audit(1768957277.466:807): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3b301420 a2=3 a3=0 items=0 ppid=1 pid=6238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:17.466000 audit[6238]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3b301420 a2=3 a3=0 items=0 ppid=1 pid=6238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:17.466000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:17.483970 kernel: audit: type=1327 audit(1768957277.466:807): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:17.487189 systemd-logind[2449]: New session 16 of user core. Jan 21 01:01:17.494103 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 21 01:01:17.495000 audit[6238]: USER_START pid=6238 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.501000 audit[6242]: CRED_ACQ pid=6242 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.505027 kernel: audit: type=1105 audit(1768957277.495:808): pid=6238 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.505080 kernel: audit: type=1103 audit(1768957277.501:809): pid=6242 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.856228 sshd[6242]: Connection closed by 10.200.16.10 port 49646 Jan 21 01:01:17.854749 sshd-session[6238]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:17.875537 kernel: audit: type=1106 audit(1768957277.855:810): pid=6238 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.855000 audit[6238]: USER_END pid=6238 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.876267 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:49646.service: Deactivated successfully. Jan 21 01:01:17.878765 systemd[1]: session-16.scope: Deactivated successfully. Jan 21 01:01:17.881325 systemd-logind[2449]: Session 16 logged out. Waiting for processes to exit. Jan 21 01:01:17.883963 systemd-logind[2449]: Removed session 16. Jan 21 01:01:17.855000 audit[6238]: CRED_DISP pid=6238 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:17.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.39:22-10.200.16.10:49646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:01:17.890833 kernel: audit: type=1104 audit(1768957277.855:811): pid=6238 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:22.880821 kubelet[3975]: E0121 01:01:22.880759 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:01:22.881545 containerd[2473]: time="2026-01-21T01:01:22.881491934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 01:01:22.987156 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:22.987402 kernel: audit: type=1130 audit(1768957282.983:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.39:22-10.200.16.10:35344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:22.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.39:22-10.200.16.10:35344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:22.984192 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:35344.service - OpenSSH per-connection server daemon (10.200.16.10:35344). 
Jan 21 01:01:23.133268 containerd[2473]: time="2026-01-21T01:01:23.133150554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:23.136139 containerd[2473]: time="2026-01-21T01:01:23.136030199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 01:01:23.136139 containerd[2473]: time="2026-01-21T01:01:23.136071716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:23.136464 kubelet[3975]: E0121 01:01:23.136433 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:01:23.136613 kubelet[3975]: E0121 01:01:23.136556 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:01:23.137295 kubelet[3975]: E0121 01:01:23.137241 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqt6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-d2xs2_calico-system(346360e9-6dd0-47dd-8091-663997b6e137): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:23.138935 kubelet[3975]: E0121 01:01:23.138892 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:01:23.602000 audit[6264]: USER_ACCT pid=6264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:23.607528 sshd-session[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:23.608273 sshd[6264]: Accepted publickey for core from 10.200.16.10 port 35344 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:23.609463 kernel: audit: type=1101 audit(1768957283.602:814): pid=6264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:23.609523 kernel: audit: type=1103 audit(1768957283.605:815): pid=6264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:23.605000 audit[6264]: CRED_ACQ pid=6264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:23.618132 kernel: audit: type=1006 audit(1768957283.605:816): pid=6264 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 21 01:01:23.616378 systemd-logind[2449]: New session 17 of user core. 
Jan 21 01:01:23.605000 audit[6264]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc76df990 a2=3 a3=0 items=0 ppid=1 pid=6264 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:23.625594 kernel: audit: type=1300 audit(1768957283.605:816): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc76df990 a2=3 a3=0 items=0 ppid=1 pid=6264 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:23.605000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:23.627953 kernel: audit: type=1327 audit(1768957283.605:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:23.629275 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 21 01:01:23.631000 audit[6264]: USER_START pid=6264 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:23.631000 audit[6268]: CRED_ACQ pid=6268 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:23.643478 kernel: audit: type=1105 audit(1768957283.631:817): pid=6264 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:23.643544 kernel: audit: type=1103 audit(1768957283.631:818): pid=6268 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:23.995839 sshd[6268]: Connection closed by 10.200.16.10 port 35344 Jan 21 01:01:23.997008 sshd-session[6264]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:23.997000 audit[6264]: USER_END pid=6264 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:24.001155 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:35344.service: Deactivated successfully. Jan 21 01:01:24.003104 systemd[1]: session-17.scope: Deactivated successfully. Jan 21 01:01:24.005749 systemd-logind[2449]: Session 17 logged out. Waiting for processes to exit. Jan 21 01:01:24.006721 systemd-logind[2449]: Removed session 17. 
Jan 21 01:01:23.997000 audit[6264]: CRED_DISP pid=6264 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:24.011408 kernel: audit: type=1106 audit(1768957283.997:819): pid=6264 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:24.011466 kernel: audit: type=1104 audit(1768957283.997:820): pid=6264 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:24.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.39:22-10.200.16.10:35344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:28.882091 kubelet[3975]: E0121 01:01:28.882041 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 01:01:29.131764 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:29.131898 kernel: audit: type=1130 audit(1768957289.120:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.39:22-10.200.16.10:35360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:29.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.39:22-10.200.16.10:35360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:29.121756 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:35360.service - OpenSSH per-connection server daemon (10.200.16.10:35360). 
Jan 21 01:01:29.742000 audit[6280]: USER_ACCT pid=6280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:29.745695 sshd-session[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:29.747222 sshd[6280]: Accepted publickey for core from 10.200.16.10 port 35360 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:29.744000 audit[6280]: CRED_ACQ pid=6280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:29.752807 systemd-logind[2449]: New session 18 of user core. Jan 21 01:01:29.755557 kernel: audit: type=1101 audit(1768957289.742:823): pid=6280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:29.755634 kernel: audit: type=1103 audit(1768957289.744:824): pid=6280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:29.757992 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 21 01:01:29.760754 kernel: audit: type=1006 audit(1768957289.744:825): pid=6280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 21 01:01:29.744000 audit[6280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfe8cb7a0 a2=3 a3=0 items=0 ppid=1 pid=6280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:29.766273 kernel: audit: type=1300 audit(1768957289.744:825): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfe8cb7a0 a2=3 a3=0 items=0 ppid=1 pid=6280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:29.744000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:29.769383 kernel: audit: type=1327 audit(1768957289.744:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:29.761000 audit[6280]: USER_START pid=6280 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:29.775981 kernel: audit: type=1105 audit(1768957289.761:826): pid=6280 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:29.763000 audit[6284]: CRED_ACQ pid=6284 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:29.782519 kernel: audit: type=1103 audit(1768957289.763:827): pid=6284 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:29.891320 containerd[2473]: time="2026-01-21T01:01:29.890649676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:01:30.155542 sshd[6284]: Connection closed by 10.200.16.10 port 35360 Jan 21 01:01:30.156082 sshd-session[6280]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:30.156000 audit[6280]: USER_END pid=6280 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:30.160455 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:35360.service: Deactivated successfully. Jan 21 01:01:30.160685 systemd-logind[2449]: Session 18 logged out. Waiting for processes to exit. Jan 21 01:01:30.163593 systemd[1]: session-18.scope: Deactivated successfully. Jan 21 01:01:30.156000 audit[6280]: CRED_DISP pid=6280 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:30.167091 systemd-logind[2449]: Removed session 18. Jan 21 01:01:30.170905 kernel: audit: type=1106 audit(1768957290.156:828): pid=6280 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:30.170971 kernel: audit: type=1104 audit(1768957290.156:829): pid=6280 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:30.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.39:22-10.200.16.10:35360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:01:30.174496 containerd[2473]: time="2026-01-21T01:01:30.173657552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:30.177014 containerd[2473]: time="2026-01-21T01:01:30.176965799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:01:30.177203 containerd[2473]: time="2026-01-21T01:01:30.177119290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:30.177569 kubelet[3975]: E0121 01:01:30.177389 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:01:30.177569 kubelet[3975]: E0121 01:01:30.177445 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:01:30.178392 kubelet[3975]: E0121 01:01:30.178009 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9bxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bb965959-6mvxn_calico-apiserver(52de65b6-e239-41f1-ad3a-143641236290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:30.179116 kubelet[3975]: E0121 01:01:30.179089 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 01:01:30.179428 containerd[2473]: time="2026-01-21T01:01:30.179379008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 01:01:30.279069 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:35138.service - OpenSSH per-connection server daemon (10.200.16.10:35138). Jan 21 01:01:30.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.39:22-10.200.16.10:35138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:01:30.438471 containerd[2473]: time="2026-01-21T01:01:30.438368509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:30.441129 containerd[2473]: time="2026-01-21T01:01:30.441079998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 01:01:30.441232 containerd[2473]: time="2026-01-21T01:01:30.441162292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:30.441343 kubelet[3975]: E0121 01:01:30.441291 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:01:30.441385 kubelet[3975]: E0121 01:01:30.441341 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:01:30.441631 kubelet[3975]: E0121 01:01:30.441574 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbwbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:30.441875 containerd[2473]: time="2026-01-21T01:01:30.441802090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:01:30.707388 containerd[2473]: time="2026-01-21T01:01:30.707274667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:30.711550 containerd[2473]: time="2026-01-21T01:01:30.711478721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:01:30.711550 containerd[2473]: time="2026-01-21T01:01:30.711514494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:30.711814 kubelet[3975]: E0121 01:01:30.711782 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:01:30.711872 kubelet[3975]: E0121 01:01:30.711830 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:01:30.712111 kubelet[3975]: E0121 01:01:30.712059 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxnf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bb965959-wpbx9_calico-apiserver(fa6a1068-061f-4c26-9e2c-97c6b3c762d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:30.712576 containerd[2473]: time="2026-01-21T01:01:30.712425960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 01:01:30.713876 kubelet[3975]: E0121 01:01:30.713838 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:01:30.881000 audit[6322]: USER_ACCT pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:30.882948 sshd[6322]: Accepted publickey for core from 10.200.16.10 port 35138 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:30.884000 audit[6322]: CRED_ACQ pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:30.884000 audit[6322]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff95ba5420 a2=3 a3=0 items=0 ppid=1 pid=6322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:30.884000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:30.885909 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:30.893132 systemd-logind[2449]: New session 19 of user core. Jan 21 01:01:30.900935 systemd[1]: Started session-19.scope - Session 19 of User core. 
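All of the pulls logged above fail the same way: containerd's resolver receives an HTTP 404 from ghcr.io, so the kubelet records ErrImagePull with "not found" rather than an authentication or network error. One way to confirm from any machine whether such a tag exists is to ask the registry's OCI distribution API directly. The sketch below is illustrative only; it assumes ghcr.io's standard anonymous token flow for public repositories, and the repository and tag names are simply the ones that appear in this log.

# Illustrative sketch: check whether a tag exists in ghcr.io via the OCI
# distribution API. Assumes the standard anonymous pull-token flow that
# ghcr.io exposes for public repositories; adjust if auth is required.
import json
import urllib.error
import urllib.request

def tag_exists(repository: str, tag: str) -> bool:
    # 1) Fetch an anonymous pull token for the repository.
    token_url = (
        "https://ghcr.io/token?service=ghcr.io"
        f"&scope=repository:{repository}:pull"
    )
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # 2) HEAD the manifest; 200 means the tag resolves, 404 matches the
    #    "failed to resolve image ... not found" errors in the log above.
    manifest_url = f"https://ghcr.io/v2/{repository}/manifests/{tag}"
    req = urllib.request.Request(manifest_url, method="HEAD")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header(
        "Accept",
        "application/vnd.oci.image.index.v1+json, "
        "application/vnd.docker.distribution.manifest.list.v2+json",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    print(tag_exists("flatcar/calico/apiserver", "v3.30.4"))

A False result here corresponds to the 404 / "not found" lines above; a True result would instead point the investigation back at the node's containerd configuration.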
Jan 21 01:01:30.904000 audit[6322]: USER_START pid=6322 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:30.906000 audit[6326]: CRED_ACQ pid=6326 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:30.947013 containerd[2473]: time="2026-01-21T01:01:30.946972115Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:30.949698 containerd[2473]: time="2026-01-21T01:01:30.949650857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 01:01:30.949796 containerd[2473]: time="2026-01-21T01:01:30.949755267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:30.949970 kubelet[3975]: E0121 01:01:30.949939 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:01:30.950016 kubelet[3975]: E0121 01:01:30.949999 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:01:30.950180 kubelet[3975]: E0121 01:01:30.950143 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbwbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6b85f_calico-system(ce3bc266-4945-4335-b09f-5dc1a5736d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:30.951578 kubelet[3975]: E0121 01:01:30.951529 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:01:31.444653 sshd[6326]: Connection closed by 10.200.16.10 port 35138 Jan 21 01:01:31.445858 sshd-session[6322]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:31.446000 audit[6322]: USER_END pid=6322 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:31.446000 audit[6322]: CRED_DISP pid=6322 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:31.449697 systemd-logind[2449]: Session 19 logged out. Waiting for processes to exit. Jan 21 01:01:31.449994 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:35138.service: Deactivated successfully. Jan 21 01:01:31.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.39:22-10.200.16.10:35138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:31.452430 systemd[1]: session-19.scope: Deactivated successfully. Jan 21 01:01:31.454068 systemd-logind[2449]: Removed session 19. Jan 21 01:01:31.576857 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:35154.service - OpenSSH per-connection server daemon (10.200.16.10:35154). Jan 21 01:01:31.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.39:22-10.200.16.10:35154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:32.171000 audit[6336]: USER_ACCT pid=6336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:32.173727 sshd[6336]: Accepted publickey for core from 10.200.16.10 port 35154 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:32.173000 audit[6336]: CRED_ACQ pid=6336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:32.173000 audit[6336]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff38905aa0 a2=3 a3=0 items=0 ppid=1 pid=6336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:32.173000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:32.175442 sshd-session[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:32.183129 systemd-logind[2449]: New session 20 of user core. Jan 21 01:01:32.186973 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 21 01:01:32.188000 audit[6336]: USER_START pid=6336 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:32.190000 audit[6340]: CRED_ACQ pid=6340 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:33.120000 audit[6350]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=6350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:33.120000 audit[6350]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcb26c5f70 a2=0 a3=7ffcb26c5f5c items=0 ppid=4132 pid=6350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:33.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:33.131000 audit[6350]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=6350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:33.131000 audit[6350]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcb26c5f70 a2=0 a3=0 items=0 ppid=4132 pid=6350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:33.131000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:33.146000 audit[6352]: NETFILTER_CFG table=filter:137 family=2 entries=38 op=nft_register_rule pid=6352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:33.146000 audit[6352]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc46e3ea40 a2=0 a3=7ffc46e3ea2c items=0 ppid=4132 pid=6352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:33.146000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:33.151000 audit[6352]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=6352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:33.151000 audit[6352]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc46e3ea40 a2=0 a3=0 items=0 ppid=4132 pid=6352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:33.151000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:33.231232 sshd[6340]: Connection closed by 10.200.16.10 port 35154 Jan 21 01:01:33.233987 sshd-session[6336]: pam_unix(sshd:session): session closed 
for user core Jan 21 01:01:33.234000 audit[6336]: USER_END pid=6336 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:33.234000 audit[6336]: CRED_DISP pid=6336 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:33.238914 systemd-logind[2449]: Session 20 logged out. Waiting for processes to exit. Jan 21 01:01:33.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.39:22-10.200.16.10:35154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:33.239533 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:35154.service: Deactivated successfully. Jan 21 01:01:33.242960 systemd[1]: session-20.scope: Deactivated successfully. Jan 21 01:01:33.246005 systemd-logind[2449]: Removed session 20. Jan 21 01:01:33.353068 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:35160.service - OpenSSH per-connection server daemon (10.200.16.10:35160). Jan 21 01:01:33.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.39:22-10.200.16.10:35160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:33.884866 containerd[2473]: time="2026-01-21T01:01:33.883595620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 01:01:33.952000 audit[6357]: USER_ACCT pid=6357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:33.952951 sshd[6357]: Accepted publickey for core from 10.200.16.10 port 35160 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:33.952000 audit[6357]: CRED_ACQ pid=6357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:33.953000 audit[6357]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc09c1ce20 a2=3 a3=0 items=0 ppid=1 pid=6357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:33.953000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:33.954468 sshd-session[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:33.959971 systemd-logind[2449]: New session 21 of user core. Jan 21 01:01:33.968949 systemd[1]: Started session-21.scope - Session 21 of User core. 
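The SSH activity interleaved with the pull errors follows a fixed audit pattern per connection: USER_ACCT and CRED_ACQ when the public key is accepted, USER_START when the PAM session opens, then USER_END, CRED_DISP and a systemd SERVICE_STOP when it closes, all sharing one ses= identifier. As an aid for reading such interleaved output, here is a minimal sketch (assuming the journal text form shown in this log) that pairs USER_START and USER_END records by session id and reports each session's duration.

# Minimal sketch: pair audit USER_START / USER_END records by their ses= field
# and report how long each SSH session lasted. Assumes log lines in the journal
# text form shown here ("Jan 21 01:01:30.904000 audit[6322]: USER_START ...").
import re
import sys
from datetime import datetime

RECORD = re.compile(
    r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: "
    r"(?P<type>USER_START|USER_END) .*?ses=(?P<ses>\d+)"
)

def session_durations(text: str):
    started = {}
    for m in RECORD.finditer(text):
        # %b %d %H:%M:%S.%f carries no year, which is fine for computing deltas.
        ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
        ses = m["ses"]
        if m["type"] == "USER_START":
            started[ses] = ts
        elif ses in started:
            yield ses, (ts - started.pop(ses)).total_seconds()

if __name__ == "__main__":
    for ses, seconds in session_durations(sys.stdin.read()):
        print(f"audit session {ses}: {seconds:.1f}s")

Fed this journal, it would report, for example, that audit session 19 lasted under a second, matching the quick login/logout cycles visible above.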
Jan 21 01:01:33.972000 audit[6357]: USER_START pid=6357 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:33.975000 audit[6361]: CRED_ACQ pid=6361 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:34.132429 containerd[2473]: time="2026-01-21T01:01:34.131750345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:34.135338 containerd[2473]: time="2026-01-21T01:01:34.134927647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 01:01:34.135338 containerd[2473]: time="2026-01-21T01:01:34.134950422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:34.135510 kubelet[3975]: E0121 01:01:34.135473 3975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:01:34.136287 kubelet[3975]: E0121 01:01:34.135517 3975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:01:34.136287 kubelet[3975]: E0121 01:01:34.135666 3975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wk6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-588547dc94-gdj8l_calico-system(f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:34.136878 kubelet[3975]: E0121 01:01:34.136843 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:01:34.553517 sshd[6361]: Connection closed by 10.200.16.10 port 35160 Jan 21 01:01:34.554490 sshd-session[6357]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:34.557821 kernel: 
kauditd_printk_skb: 43 callbacks suppressed Jan 21 01:01:34.557915 kernel: audit: type=1106 audit(1768957294.555:859): pid=6357 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:34.555000 audit[6357]: USER_END pid=6357 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:34.561229 systemd-logind[2449]: Session 21 logged out. Waiting for processes to exit. Jan 21 01:01:34.561762 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:35160.service: Deactivated successfully. Jan 21 01:01:34.565435 systemd[1]: session-21.scope: Deactivated successfully. Jan 21 01:01:34.555000 audit[6357]: CRED_DISP pid=6357 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:34.568631 systemd-logind[2449]: Removed session 21. Jan 21 01:01:34.576794 kernel: audit: type=1104 audit(1768957294.555:860): pid=6357 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:34.576916 kernel: audit: type=1131 audit(1768957294.557:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.39:22-10.200.16.10:35160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:34.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.39:22-10.200.16.10:35160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:34.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.39:22-10.200.16.10:35168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:34.675250 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:35168.service - OpenSSH per-connection server daemon (10.200.16.10:35168). Jan 21 01:01:34.681805 kernel: audit: type=1130 audit(1768957294.674:862): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.39:22-10.200.16.10:35168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:01:34.881087 kubelet[3975]: E0121 01:01:34.881022 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:01:35.265000 audit[6370]: USER_ACCT pid=6370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:35.267154 sshd[6370]: Accepted publickey for core from 10.200.16.10 port 35168 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:35.272062 kernel: audit: type=1101 audit(1768957295.265:863): pid=6370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:35.272136 kernel: audit: type=1103 audit(1768957295.269:864): pid=6370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:35.269000 audit[6370]: CRED_ACQ pid=6370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:35.271939 sshd-session[6370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:35.286146 kernel: audit: type=1006 audit(1768957295.270:865): pid=6370 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 21 01:01:35.286213 kernel: audit: type=1300 audit(1768957295.270:865): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd09773fc0 a2=3 a3=0 items=0 ppid=1 pid=6370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:35.270000 audit[6370]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd09773fc0 a2=3 a3=0 items=0 ppid=1 pid=6370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:35.291238 systemd-logind[2449]: New session 22 of user core. Jan 21 01:01:35.270000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:35.296263 kernel: audit: type=1327 audit(1768957295.270:865): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:35.295519 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 21 01:01:35.299000 audit[6370]: USER_START pid=6370 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:35.307836 kernel: audit: type=1105 audit(1768957295.299:866): pid=6370 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:35.306000 audit[6388]: CRED_ACQ pid=6388 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:35.665312 sshd[6388]: Connection closed by 10.200.16.10 port 35168 Jan 21 01:01:35.665840 sshd-session[6370]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:35.668000 audit[6370]: USER_END pid=6370 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:35.668000 audit[6370]: CRED_DISP pid=6370 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:35.671702 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:35168.service: Deactivated successfully. Jan 21 01:01:35.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.39:22-10.200.16.10:35168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:35.675138 systemd[1]: session-22.scope: Deactivated successfully. Jan 21 01:01:35.676828 systemd-logind[2449]: Session 22 logged out. Waiting for processes to exit. Jan 21 01:01:35.681457 systemd-logind[2449]: Removed session 22. 
Jan 21 01:01:38.488000 audit[6407]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=6407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:38.488000 audit[6407]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda9aa8040 a2=0 a3=7ffda9aa802c items=0 ppid=4132 pid=6407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:38.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:38.496000 audit[6407]: NETFILTER_CFG table=nat:140 family=2 entries=104 op=nft_register_chain pid=6407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:38.496000 audit[6407]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffda9aa8040 a2=0 a3=7ffda9aa802c items=0 ppid=4132 pid=6407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:38.496000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:40.800432 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 21 01:01:40.800546 kernel: audit: type=1130 audit(1768957300.792:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.39:22-10.200.16.10:48862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:40.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.39:22-10.200.16.10:48862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:40.793571 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:48862.service - OpenSSH per-connection server daemon (10.200.16.10:48862). 
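The NETFILTER_CFG records above are each followed by a PROCTITLE record whose value is the process's argv, hex-encoded with NUL bytes separating the arguments. Decoding it shows which command drove the nft rule updates; the long hex string in these entries is simply an iptables-restore invocation. A short sketch of the decoding:

# Sketch: decode an audit PROCTITLE value. The field is hex-encoded argv with
# NUL bytes between arguments, so the value from the records above decodes to
# the full iptables-restore command line.
def decode_proctitle(hex_value: str) -> list[str]:
    return [arg.decode() for arg in bytes.fromhex(hex_value).split(b"\x00")]

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']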
Jan 21 01:01:41.390000 audit[6409]: USER_ACCT pid=6409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.393228 sshd-session[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:41.395224 sshd[6409]: Accepted publickey for core from 10.200.16.10 port 48862 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:41.391000 audit[6409]: CRED_ACQ pid=6409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.404127 kernel: audit: type=1101 audit(1768957301.390:874): pid=6409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.404194 kernel: audit: type=1103 audit(1768957301.391:875): pid=6409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.403909 systemd-logind[2449]: New session 23 of user core. Jan 21 01:01:41.391000 audit[6409]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbb3197d0 a2=3 a3=0 items=0 ppid=1 pid=6409 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:41.409995 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 21 01:01:41.413079 kernel: audit: type=1006 audit(1768957301.391:876): pid=6409 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 21 01:01:41.413136 kernel: audit: type=1300 audit(1768957301.391:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbb3197d0 a2=3 a3=0 items=0 ppid=1 pid=6409 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:41.416874 kernel: audit: type=1327 audit(1768957301.391:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:41.416928 kernel: audit: type=1105 audit(1768957301.415:877): pid=6409 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.391000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:41.415000 audit[6409]: USER_START pid=6409 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.417000 audit[6413]: CRED_ACQ pid=6413 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.427703 kernel: audit: type=1103 audit(1768957301.417:878): pid=6413 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.797742 sshd[6413]: Connection closed by 10.200.16.10 port 48862 Jan 21 01:01:41.798223 sshd-session[6409]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:41.801000 audit[6409]: USER_END pid=6409 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.805672 systemd-logind[2449]: Session 23 logged out. Waiting for processes to exit. Jan 21 01:01:41.806810 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:48862.service: Deactivated successfully. Jan 21 01:01:41.813435 kernel: audit: type=1106 audit(1768957301.801:879): pid=6409 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.812844 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 21 01:01:41.801000 audit[6409]: CRED_DISP pid=6409 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.820384 systemd-logind[2449]: Removed session 23. Jan 21 01:01:41.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.39:22-10.200.16.10:48862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:41.821804 kernel: audit: type=1104 audit(1768957301.801:880): pid=6409 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:41.884153 kubelet[3975]: E0121 01:01:41.884120 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:01:42.883360 kubelet[3975]: E0121 01:01:42.883222 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 01:01:43.883032 kubelet[3975]: E0121 01:01:43.882928 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:01:44.881918 kubelet[3975]: E0121 01:01:44.881221 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:01:45.881648 kubelet[3975]: E0121 01:01:45.881268 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 01:01:46.928058 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:48868.service - OpenSSH per-connection server daemon (10.200.16.10:48868). Jan 21 01:01:46.937718 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:46.937799 kernel: audit: type=1130 audit(1768957306.926:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.39:22-10.200.16.10:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:46.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.39:22-10.200.16.10:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:47.541000 audit[6425]: USER_ACCT pid=6425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.544367 sshd[6425]: Accepted publickey for core from 10.200.16.10 port 48868 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:47.546192 sshd-session[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:47.541000 audit[6425]: CRED_ACQ pid=6425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.553294 systemd-logind[2449]: New session 24 of user core. 
Jan 21 01:01:47.555740 kernel: audit: type=1101 audit(1768957307.541:883): pid=6425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.555813 kernel: audit: type=1103 audit(1768957307.541:884): pid=6425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.558709 kernel: audit: type=1006 audit(1768957307.541:885): pid=6425 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 21 01:01:47.541000 audit[6425]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe84d58db0 a2=3 a3=0 items=0 ppid=1 pid=6425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:47.562800 kernel: audit: type=1300 audit(1768957307.541:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe84d58db0 a2=3 a3=0 items=0 ppid=1 pid=6425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:47.563222 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 21 01:01:47.541000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:47.566000 audit[6425]: USER_START pid=6425 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.575294 kernel: audit: type=1327 audit(1768957307.541:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:47.575369 kernel: audit: type=1105 audit(1768957307.566:886): pid=6425 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.569000 audit[6429]: CRED_ACQ pid=6429 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.583264 kernel: audit: type=1103 audit(1768957307.569:887): pid=6429 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.881114 kubelet[3975]: E0121 01:01:47.881076 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:01:47.940207 sshd[6429]: Connection closed by 10.200.16.10 port 48868 Jan 21 01:01:47.940665 sshd-session[6425]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:47.940000 audit[6425]: USER_END pid=6425 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.947122 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:48868.service: Deactivated successfully. Jan 21 01:01:47.940000 audit[6425]: CRED_DISP pid=6425 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.951640 systemd[1]: session-24.scope: Deactivated successfully. Jan 21 01:01:47.952777 systemd-logind[2449]: Session 24 logged out. Waiting for processes to exit. Jan 21 01:01:47.955438 kernel: audit: type=1106 audit(1768957307.940:888): pid=6425 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.955509 kernel: audit: type=1104 audit(1768957307.940:889): pid=6425 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:47.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.39:22-10.200.16.10:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:47.957606 systemd-logind[2449]: Removed session 24. Jan 21 01:01:53.070300 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:46068.service - OpenSSH per-connection server daemon (10.200.16.10:46068). Jan 21 01:01:53.077832 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:53.077867 kernel: audit: type=1130 audit(1768957313.069:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.39:22-10.200.16.10:46068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:53.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.39:22-10.200.16.10:46068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:01:53.693000 audit[6443]: USER_ACCT pid=6443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:53.695546 sshd[6443]: Accepted publickey for core from 10.200.16.10 port 46068 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:53.698799 sshd-session[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:53.700189 kernel: audit: type=1101 audit(1768957313.693:892): pid=6443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:53.700492 kernel: audit: type=1103 audit(1768957313.696:893): pid=6443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:53.696000 audit[6443]: CRED_ACQ pid=6443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:53.707953 kernel: audit: type=1006 audit(1768957313.696:894): pid=6443 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 21 01:01:53.713547 kernel: audit: type=1300 audit(1768957313.696:894): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda88b55f0 a2=3 a3=0 items=0 ppid=1 pid=6443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:53.696000 audit[6443]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda88b55f0 a2=3 a3=0 items=0 ppid=1 pid=6443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:53.696000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:53.715800 kernel: audit: type=1327 audit(1768957313.696:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:53.717351 systemd-logind[2449]: New session 25 of user core. Jan 21 01:01:53.725060 systemd[1]: Started session-25.scope - Session 25 of User core. 
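The type=1327 (PROCTITLE) records above carry the process title as a hex string; the recurring value 737368642D73657373696F6E3A20636F7265205B707269765D is simply the command line of the privileged sshd-session process. A minimal decoding sketch in Python, assuming the usual audit convention of hex-encoding /proc/<pid>/cmdline with NUL bytes separating arguments:

# Decode the hex-encoded proctitle field of an audit type=1327 (PROCTITLE) record.
# The sample value is taken from the records above; NUL bytes, if present, separate
# the individual argv entries.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

sample = "737368642D73657373696F6E3A20636F7265205B707269765D"
print(decode_proctitle(sample))  # -> sshd-session: core [priv]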
Jan 21 01:01:53.727000 audit[6443]: USER_START pid=6443 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:53.738282 kernel: audit: type=1105 audit(1768957313.727:895): pid=6443 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:53.738347 kernel: audit: type=1103 audit(1768957313.732:896): pid=6447 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:53.732000 audit[6447]: CRED_ACQ pid=6447 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:53.882855 kubelet[3975]: E0121 01:01:53.882808 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:01:54.080816 sshd[6447]: Connection closed by 10.200.16.10 port 46068 Jan 21 01:01:54.081882 sshd-session[6443]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:54.082000 audit[6443]: USER_END pid=6443 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:54.086161 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:46068.service: Deactivated successfully. Jan 21 01:01:54.088937 systemd[1]: session-25.scope: Deactivated successfully. Jan 21 01:01:54.090786 systemd-logind[2449]: Session 25 logged out. Waiting for processes to exit. 
Jan 21 01:01:54.092834 kernel: audit: type=1106 audit(1768957314.082:897): pid=6443 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:54.091801 systemd-logind[2449]: Removed session 25. Jan 21 01:01:54.082000 audit[6443]: CRED_DISP pid=6443 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:54.097800 kernel: audit: type=1104 audit(1768957314.082:898): pid=6443 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:54.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.39:22-10.200.16.10:46068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:54.882468 kubelet[3975]: E0121 01:01:54.882037 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:01:55.881638 kubelet[3975]: E0121 01:01:55.881518 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f69864b5-f6kfk" podUID="583e9439-b173-47b8-8158-974665ab3f14" Jan 21 01:01:56.880712 kubelet[3975]: E0121 01:01:56.880680 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:01:58.881379 kubelet[3975]: E0121 01:01:58.881296 3975 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-6mvxn" podUID="52de65b6-e239-41f1-ad3a-143641236290" Jan 21 01:01:59.206102 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:46074.service - OpenSSH per-connection server daemon (10.200.16.10:46074). Jan 21 01:01:59.213734 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:59.213848 kernel: audit: type=1130 audit(1768957319.205:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.39:22-10.200.16.10:46074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:59.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.39:22-10.200.16.10:46074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:59.818797 kernel: audit: type=1101 audit(1768957319.812:901): pid=6460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:59.812000 audit[6460]: USER_ACCT pid=6460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:59.819787 sshd[6460]: Accepted publickey for core from 10.200.16.10 port 46074 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:01:59.822798 sshd-session[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:59.821000 audit[6460]: CRED_ACQ pid=6460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:59.831800 kernel: audit: type=1103 audit(1768957319.821:902): pid=6460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:59.836013 systemd-logind[2449]: New session 26 of user core. 
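The repeated ImagePullBackOff entries all end in the same containerd NotFound error: the ghcr.io/flatcar/calico/*:v3.30.4 tags do not resolve. One way to confirm that from outside the node is to ask the registry directly. The sketch below speaks the standard OCI distribution API; the anonymous token endpoint and the Accept media types are assumptions about ghcr.io's usual behaviour, not something the log itself confirms.

import json
import urllib.error
import urllib.request

def tag_exists(registry: str, repository: str, tag: str) -> bool:
    # 1) Fetch an anonymous pull token for the repository (assumed standard /token endpoint).
    token_url = f"https://{registry}/token?scope=repository:{repository}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # 2) Request the manifest; a 404 here corresponds to the kubelet's
    #    "failed to resolve image ... not found" errors above.
    req = urllib.request.Request(
        f"https://{registry}/v2/{repository}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(tag_exists("ghcr.io", "flatcar/calico/goldmane", "v3.30.4"))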
Jan 21 01:01:59.836790 kernel: audit: type=1006 audit(1768957319.821:903): pid=6460 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 21 01:01:59.821000 audit[6460]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdcd77d50 a2=3 a3=0 items=0 ppid=1 pid=6460 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:59.844176 kernel: audit: type=1300 audit(1768957319.821:903): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdcd77d50 a2=3 a3=0 items=0 ppid=1 pid=6460 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:59.821000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:59.846638 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 21 01:01:59.847925 kernel: audit: type=1327 audit(1768957319.821:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:59.851000 audit[6460]: USER_START pid=6460 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:59.859797 kernel: audit: type=1105 audit(1768957319.851:904): pid=6460 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:59.858000 audit[6464]: CRED_ACQ pid=6464 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:01:59.867809 kernel: audit: type=1103 audit(1768957319.858:905): pid=6464 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:00.231801 sshd[6464]: Connection closed by 10.200.16.10 port 46074 Jan 21 01:02:00.231947 sshd-session[6460]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:00.232000 audit[6460]: USER_END pid=6460 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:00.238000 audit[6460]: CRED_DISP pid=6460 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:00.242300 kernel: audit: type=1106 audit(1768957320.232:906): pid=6460 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:00.242367 kernel: audit: type=1104 audit(1768957320.238:907): pid=6460 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:00.244569 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:46074.service: Deactivated successfully. Jan 21 01:02:00.246303 systemd[1]: session-26.scope: Deactivated successfully. Jan 21 01:02:00.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.39:22-10.200.16.10:46074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:02:00.248047 systemd-logind[2449]: Session 26 logged out. Waiting for processes to exit. Jan 21 01:02:00.248632 systemd-logind[2449]: Removed session 26. Jan 21 01:02:02.880798 kubelet[3975]: E0121 01:02:02.880458 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-d2xs2" podUID="346360e9-6dd0-47dd-8091-663997b6e137" Jan 21 01:02:05.354662 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:60654.service - OpenSSH per-connection server daemon (10.200.16.10:60654). Jan 21 01:02:05.361176 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:02:05.361257 kernel: audit: type=1130 audit(1768957325.354:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.39:22-10.200.16.10:60654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:02:05.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.39:22-10.200.16.10:60654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:02:05.965000 audit[6503]: USER_ACCT pid=6503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:05.974015 sshd[6503]: Accepted publickey for core from 10.200.16.10 port 60654 ssh2: RSA SHA256:6P3rLeTGuMtesju6nwnSc9d+K9uncMhWlrAd2WxpJUg Jan 21 01:02:05.975072 kernel: audit: type=1101 audit(1768957325.965:910): pid=6503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:05.983824 kernel: audit: type=1103 audit(1768957325.975:911): pid=6503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:05.975000 audit[6503]: CRED_ACQ pid=6503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:05.977385 sshd-session[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:02:05.995215 kernel: audit: type=1006 audit(1768957325.975:912): pid=6503 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 21 01:02:06.000376 systemd-logind[2449]: New session 27 of user core. Jan 21 01:02:06.000966 systemd[1]: Started session-27.scope - Session 27 of User core. 
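Every audit record above is keyed by an audit(<epoch seconds>.<milliseconds>:<serial>) stamp, and records sharing a serial describe the same event (the SYSCALL and PROCTITLE pair at :912, for instance). An illustrative parser for that stamp:

import re
from datetime import datetime, timezone

# Matches the standard Linux audit stamp, e.g. "audit(1768957326.013:913)".
AUDIT_STAMP = re.compile(r"audit\((\d+)\.(\d+):(\d+)\)")

def parse_audit_stamp(text: str):
    m = AUDIT_STAMP.search(text)
    if m is None:
        return None
    seconds, millis, serial = m.groups()
    stamp = datetime.fromtimestamp(int(seconds), tz=timezone.utc).replace(
        microsecond=int(millis) * 1000
    )
    return stamp, int(serial)

print(parse_audit_stamp("type=1105 audit(1768957326.013:913): pid=6503"))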
Jan 21 01:02:05.975000 audit[6503]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefeb0a300 a2=3 a3=0 items=0 ppid=1 pid=6503 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:06.009949 kernel: audit: type=1300 audit(1768957325.975:912): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefeb0a300 a2=3 a3=0 items=0 ppid=1 pid=6503 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:02:05.975000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:02:06.013816 kernel: audit: type=1327 audit(1768957325.975:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:02:06.013000 audit[6503]: USER_START pid=6503 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:06.024212 kernel: audit: type=1105 audit(1768957326.013:913): pid=6503 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:06.017000 audit[6507]: CRED_ACQ pid=6507 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:06.039806 kernel: audit: type=1103 audit(1768957326.017:914): pid=6507 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:06.398815 sshd[6507]: Connection closed by 10.200.16.10 port 60654 Jan 21 01:02:06.399922 sshd-session[6503]: pam_unix(sshd:session): session closed for user core Jan 21 01:02:06.401000 audit[6503]: USER_END pid=6503 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:06.418728 kernel: audit: type=1106 audit(1768957326.401:915): pid=6503 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:06.418814 kernel: audit: type=1104 audit(1768957326.401:916): pid=6503 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 
01:02:06.401000 audit[6503]: CRED_DISP pid=6503 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 21 01:02:06.414408 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:60654.service: Deactivated successfully. Jan 21 01:02:06.417499 systemd[1]: session-27.scope: Deactivated successfully. Jan 21 01:02:06.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.39:22-10.200.16.10:60654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:02:06.419717 systemd-logind[2449]: Session 27 logged out. Waiting for processes to exit. Jan 21 01:02:06.422053 systemd-logind[2449]: Removed session 27. Jan 21 01:02:06.880821 kubelet[3975]: E0121 01:02:06.880752 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6b85f" podUID="ce3bc266-4945-4335-b09f-5dc1a5736d5d" Jan 21 01:02:07.881758 kubelet[3975]: E0121 01:02:07.881712 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9" podUID="fa6a1068-061f-4c26-9e2c-97c6b3c762d5" Jan 21 01:02:09.881560 kubelet[3975]: E0121 01:02:09.880574 3975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-588547dc94-gdj8l" podUID="f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e" Jan 21 01:02:09.900515 kubelet[3975]: E0121 01:02:09.900305 3975 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:54818->10.200.8.20:2379: read: connection reset by peer" event="&Event{ObjectMeta:{calico-kube-controllers-588547dc94-gdj8l.188c99328c8b4a49 calico-system 1679 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-kube-controllers-588547dc94-gdj8l,UID:f5ae02c8-aa71-4ba8-969f-2dd0209a0e9e,APIVersion:v1,ResourceVersion:836,FieldPath:spec.containers{calico-kube-controllers},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4547.0.0-n-ed178c4493,},FirstTimestamp:2026-01-21 00:59:56 +0000 UTC,LastTimestamp:2026-01-21 01:02:09.880520593 +0000 UTC m=+178.093249149,Count:9,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-n-ed178c4493,}"
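The kubelet keeps reporting the same back-off for a handful of Calico workloads; tallying the entries per pod= field makes the affected set easy to read off. An illustrative tally over text shaped like those lines (the sample strings are abbreviated stand-ins, not verbatim log entries):

import re
from collections import Counter

POD_FIELD = re.compile(r'pod="([^"]+)"')

def backoff_counts(log_text: str) -> Counter:
    # Count "Error syncing pod" lines per pod, the way the kubelet entries above
    # repeat for the workloads stuck in ImagePullBackOff.
    counts = Counter()
    for line in log_text.splitlines():
        if "Error syncing pod" in line and "ImagePullBackOff" in line:
            match = POD_FIELD.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

sample = (
    '"Error syncing pod, skipping" err="... ImagePullBackOff ..." '
    'pod="calico-system/goldmane-666569f655-d2xs2"\n'
    '"Error syncing pod, skipping" err="... ImagePullBackOff ..." '
    'pod="calico-system/goldmane-666569f655-d2xs2"\n'
    '"Error syncing pod, skipping" err="... ImagePullBackOff ..." '
    'pod="calico-apiserver/calico-apiserver-58bb965959-wpbx9"\n'
)
print(backoff_counts(sample))
# Counter({'calico-system/goldmane-666569f655-d2xs2': 2,
#          'calico-apiserver/calico-apiserver-58bb965959-wpbx9': 1})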