Jan 23 18:51:09.777285 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 15:50:57 -00 2026
Jan 23 18:51:09.777312 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051
Jan 23 18:51:09.777324 kernel: BIOS-provided physical RAM map:
Jan 23 18:51:09.777332 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:51:09.777339 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 23 18:51:09.777346 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jan 23 18:51:09.777354 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jan 23 18:51:09.777362 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jan 23 18:51:09.777369 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jan 23 18:51:09.777378 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 23 18:51:09.777385 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 23 18:51:09.777392 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 23 18:51:09.777399 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 23 18:51:09.777407 kernel: printk: legacy bootconsole [earlyser0] enabled
Jan 23 18:51:09.777416 kernel: NX (Execute Disable) protection: active
Jan 23 18:51:09.777425 kernel: APIC: Static calls initialized
Jan 23 18:51:09.777433 kernel: efi: EFI v2.7 by Microsoft
Jan 23 18:51:09.777441 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9ac698 RNG=0x3ffd2018
Jan 23 18:51:09.777449 kernel: random: crng init done
Jan 23 18:51:09.777457 kernel: secureboot: Secure boot disabled
Jan 23 18:51:09.777464 kernel: SMBIOS 3.1.0 present.
Jan 23 18:51:09.777472 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Jan 23 18:51:09.777544 kernel: DMI: Memory slots populated: 2/2
Jan 23 18:51:09.777552 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 23 18:51:09.777560 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jan 23 18:51:09.777570 kernel: Hyper-V: Nested features: 0x3e0101
Jan 23 18:51:09.777578 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 23 18:51:09.777585 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 23 18:51:09.777593 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 18:51:09.777601 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 18:51:09.777609 kernel: tsc: Detected 2300.000 MHz processor
Jan 23 18:51:09.777617 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 18:51:09.777627 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 18:51:09.777635 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jan 23 18:51:09.777645 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 18:51:09.777653 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 18:51:09.777661 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jan 23 18:51:09.777669 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jan 23 18:51:09.777677 kernel: Using GB pages for direct mapping
Jan 23 18:51:09.777685 kernel: ACPI: Early table checksum verification disabled
Jan 23 18:51:09.777698 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 23 18:51:09.777707 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:51:09.777715 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:51:09.777724 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 18:51:09.777732 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 23 18:51:09.777741 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:51:09.777751 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:51:09.777760 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:51:09.777768 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 18:51:09.777777 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 18:51:09.777786 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:51:09.777794 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 23 18:51:09.777804 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Jan 23 18:51:09.777813 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 23 18:51:09.777821 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 23 18:51:09.777830 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 23 18:51:09.777838 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 23 18:51:09.777846 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 23 18:51:09.777854 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jan 23 18:51:09.777864 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 23 18:51:09.777872 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 18:51:09.777881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jan 23 18:51:09.777890 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jan 23 18:51:09.777899 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jan 23 18:51:09.777907 kernel: Zone ranges:
Jan 23 18:51:09.777915 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 18:51:09.777925 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 18:51:09.777933 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 18:51:09.777941 kernel: Device empty
Jan 23 18:51:09.777949 kernel: Movable zone start for each node
Jan 23 18:51:09.777957 kernel: Early memory node ranges
Jan 23 18:51:09.777966 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 18:51:09.777974 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jan 23 18:51:09.777984 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jan 23 18:51:09.777992 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 23 18:51:09.778000 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 18:51:09.778009 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 23 18:51:09.778017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:51:09.778026 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 18:51:09.778035 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 23 18:51:09.778046 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jan 23 18:51:09.778055 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 23 18:51:09.778064 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 23 18:51:09.778073 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 18:51:09.778082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 18:51:09.778091 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 18:51:09.778100 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 23 18:51:09.778109 kernel: TSC deadline timer available
Jan 23 18:51:09.778120 kernel: CPU topo: Max. logical packages: 1
Jan 23 18:51:09.778129 kernel: CPU topo: Max. logical dies: 1
Jan 23 18:51:09.778138 kernel: CPU topo: Max. dies per package: 1
Jan 23 18:51:09.778147 kernel: CPU topo: Max. threads per core: 2
Jan 23 18:51:09.778156 kernel: CPU topo: Num. cores per package: 1
Jan 23 18:51:09.778166 kernel: CPU topo: Num. threads per package: 2
Jan 23 18:51:09.778175 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 18:51:09.778186 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 23 18:51:09.778195 kernel: Booting paravirtualized kernel on Hyper-V
Jan 23 18:51:09.778204 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 18:51:09.778215 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 18:51:09.778224 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 18:51:09.778233 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 18:51:09.778242 kernel: pcpu-alloc: [0] 0 1
Jan 23 18:51:09.778253 kernel: Hyper-V: PV spinlocks enabled
Jan 23 18:51:09.778262 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 18:51:09.778273 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051
Jan 23 18:51:09.778283 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 18:51:09.778292 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 18:51:09.778302 kernel: Fallback order for Node 0: 0
Jan 23 18:51:09.778312 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jan 23 18:51:09.778322 kernel: Policy zone: Normal
Jan 23 18:51:09.778331 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:51:09.778340 kernel: software IO TLB: area num 2.
Jan 23 18:51:09.778350 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 18:51:09.778359 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:51:09.778368 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:51:09.778378 kernel: Dynamic Preempt: voluntary
Jan 23 18:51:09.778389 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:51:09.778399 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:51:09.778415 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 18:51:09.778427 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:51:09.778437 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:51:09.778446 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:51:09.778456 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:51:09.778466 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 18:51:09.781744 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:51:09.781769 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:51:09.781779 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:51:09.781789 kernel: Using NULL legacy PIC
Jan 23 18:51:09.781797 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 23 18:51:09.781808 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:51:09.781818 kernel: Console: colour dummy device 80x25
Jan 23 18:51:09.781827 kernel: printk: legacy console [tty1] enabled
Jan 23 18:51:09.781836 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:51:09.781846 kernel: printk: legacy bootconsole [earlyser0] disabled
Jan 23 18:51:09.781856 kernel: ACPI: Core revision 20240827
Jan 23 18:51:09.781866 kernel: Failed to register legacy timer interrupt
Jan 23 18:51:09.781877 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:51:09.781887 kernel: x2apic enabled
Jan 23 18:51:09.781896 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:51:09.781905 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 18:51:09.781915 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 18:51:09.781925 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jan 23 18:51:09.781935 kernel: Hyper-V: Using IPI hypercalls
Jan 23 18:51:09.781947 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 23 18:51:09.781956 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 23 18:51:09.781966 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 23 18:51:09.781975 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 23 18:51:09.781984 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 23 18:51:09.781993 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 23 18:51:09.782003 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jan 23 18:51:09.782015 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jan 23 18:51:09.782026 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:51:09.782035 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 18:51:09.782044 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 18:51:09.782052 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:51:09.782061 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 18:51:09.782069 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 18:51:09.782079 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 23 18:51:09.782091 kernel: RETBleed: Vulnerable
Jan 23 18:51:09.782100 kernel: Speculative Store Bypass: Vulnerable
Jan 23 18:51:09.782109 kernel: active return thunk: its_return_thunk
Jan 23 18:51:09.782117 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 18:51:09.782125 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:51:09.782134 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:51:09.782142 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:51:09.782152 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 18:51:09.782161 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 18:51:09.782171 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 18:51:09.782182 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jan 23 18:51:09.782190 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jan 23 18:51:09.782199 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jan 23 18:51:09.782208 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 18:51:09.782217 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 18:51:09.782226 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 18:51:09.782235 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 18:51:09.782244 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jan 23 18:51:09.782254 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jan 23 18:51:09.782263 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jan 23 18:51:09.782271 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jan 23 18:51:09.782281 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:51:09.782290 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:51:09.782299 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:51:09.782308 kernel: landlock: Up and running.
Jan 23 18:51:09.782317 kernel: SELinux: Initializing.
Jan 23 18:51:09.782326 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 18:51:09.782336 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 18:51:09.782344 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jan 23 18:51:09.782353 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jan 23 18:51:09.782362 kernel: signal: max sigframe size: 11952
Jan 23 18:51:09.782374 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:51:09.782384 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 18:51:09.782394 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:51:09.782404 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 18:51:09.782414 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:51:09.782423 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:51:09.782432 kernel: .... node #0, CPUs: #1
Jan 23 18:51:09.782441 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 18:51:09.782452 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jan 23 18:51:09.782463 kernel: Memory: 8093404K/8383228K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 283608K reserved, 0K cma-reserved)
Jan 23 18:51:09.782473 kernel: devtmpfs: initialized
Jan 23 18:51:09.782502 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:51:09.782511 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 23 18:51:09.782520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:51:09.782532 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 18:51:09.782542 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:51:09.782552 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:51:09.782562 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:51:09.782571 kernel: audit: type=2000 audit(1769194264.084:1): state=initialized audit_enabled=0 res=1
Jan 23 18:51:09.782580 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:51:09.782589 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:51:09.782599 kernel: cpuidle: using governor menu
Jan 23 18:51:09.782610 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:51:09.782620 kernel: dca service started, version 1.12.1
Jan 23 18:51:09.782630 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jan 23 18:51:09.782639 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jan 23 18:51:09.782648 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:51:09.782657 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 18:51:09.782668 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 18:51:09.782678 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 18:51:09.782688 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 18:51:09.782698 kernel: ACPI: Added _OSI(Module Device)
Jan 23 18:51:09.782708 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 18:51:09.782717 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 18:51:09.782726 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 18:51:09.782736 kernel: ACPI: Interpreter enabled
Jan 23 18:51:09.782746 kernel: ACPI: PM: (supports S0 S5)
Jan 23 18:51:09.782756 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 18:51:09.782766 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 18:51:09.782775 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 23 18:51:09.782785 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 23 18:51:09.782794 kernel: iommu: Default domain type: Translated
Jan 23 18:51:09.782803 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 18:51:09.782814 kernel: efivars: Registered efivars operations
Jan 23 18:51:09.782823 kernel: PCI: Using ACPI for IRQ routing
Jan 23 18:51:09.782833 kernel: PCI: System does not support PCI
Jan 23 18:51:09.782843 kernel: vgaarb: loaded
Jan 23 18:51:09.782853 kernel: clocksource: Switched to clocksource tsc-early
Jan 23 18:51:09.782862 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 18:51:09.782871 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 18:51:09.782881 kernel: pnp: PnP ACPI init
Jan 23 18:51:09.782890 kernel: pnp: PnP ACPI: found 3 devices
Jan 23 18:51:09.782900 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 18:51:09.782910 kernel: NET: Registered PF_INET protocol family
Jan 23 18:51:09.782920 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 18:51:09.782929 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 18:51:09.782939 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 18:51:09.782950 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 18:51:09.782959 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 23 18:51:09.782968 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 18:51:09.782978 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 18:51:09.782988 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 18:51:09.782997 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 18:51:09.783007 kernel: NET: Registered PF_XDP protocol family
Jan 23 18:51:09.783018 kernel: PCI: CLS 0 bytes, default 64
Jan 23 18:51:09.783028 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 18:51:09.783037 kernel: software IO TLB: mapped [mem 0x000000003a9ac000-0x000000003e9ac000] (64MB)
Jan 23 18:51:09.783046 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jan 23 18:51:09.783056 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jan 23 18:51:09.783066 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jan 23 18:51:09.783076 kernel: clocksource: Switched to clocksource tsc
Jan 23 18:51:09.783087 kernel: Initialise system trusted keyrings
Jan 23 18:51:09.783097 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 23 18:51:09.783106 kernel: Key type asymmetric registered
Jan 23 18:51:09.783115 kernel: Asymmetric key parser 'x509' registered
Jan 23 18:51:09.783124 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 18:51:09.783134 kernel: io scheduler mq-deadline registered
Jan 23 18:51:09.783144 kernel: io scheduler kyber registered
Jan 23 18:51:09.783156 kernel: io scheduler bfq registered
Jan 23 18:51:09.783165 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 18:51:09.783174 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 18:51:09.783184 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:51:09.783193 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 23 18:51:09.783202 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:51:09.783212 kernel: i8042: PNP: No PS/2 controller found.
Jan 23 18:51:09.783388 kernel: rtc_cmos 00:02: registered as rtc0
Jan 23 18:51:09.783515 kernel: rtc_cmos 00:02: setting system clock to 2026-01-23T18:51:06 UTC (1769194266)
Jan 23 18:51:09.783619 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 23 18:51:09.783631 kernel: intel_pstate: Intel P-state driver initializing
Jan 23 18:51:09.783642 kernel: efifb: probing for efifb
Jan 23 18:51:09.783652 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 18:51:09.783664 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 18:51:09.783673 kernel: efifb: scrolling: redraw
Jan 23 18:51:09.783681 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 18:51:09.783691 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 18:51:09.783701 kernel: fb0: EFI VGA frame buffer device
Jan 23 18:51:09.783711 kernel: pstore: Using crash dump compression: deflate
Jan 23 18:51:09.783721 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 18:51:09.783732 kernel: NET: Registered PF_INET6 protocol family
Jan 23 18:51:09.783740 kernel: Segment Routing with IPv6
Jan 23 18:51:09.783749 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 18:51:09.783758 kernel: NET: Registered PF_PACKET protocol family
Jan 23 18:51:09.783768 kernel: Key type dns_resolver registered
Jan 23 18:51:09.783778 kernel: IPI shorthand broadcast: enabled
Jan 23 18:51:09.783788 kernel: sched_clock: Marking stable (1940005623, 98739911)->(2403942450, -365196916)
Jan 23 18:51:09.783797 kernel: registered taskstats version 1
Jan 23 18:51:09.783808 kernel: Loading compiled-in X.509 certificates
Jan 23 18:51:09.783817 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed4528912f8413ae803010e63385bcf7ed197cf1'
Jan 23 18:51:09.783826 kernel: Demotion targets for Node 0: null
Jan 23 18:51:09.783836 kernel: Key type .fscrypt registered
Jan 23 18:51:09.783845 kernel: Key type fscrypt-provisioning registered
Jan 23 18:51:09.783855 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 18:51:09.783865 kernel: ima: Allocated hash algorithm: sha1
Jan 23 18:51:09.783876 kernel: ima: No architecture policies found
Jan 23 18:51:09.783885 kernel: clk: Disabling unused clocks
Jan 23 18:51:09.783893 kernel: Freeing unused kernel image (initmem) memory: 15532K
Jan 23 18:51:09.783902 kernel: Write protecting the kernel read-only data: 47104k
Jan 23 18:51:09.783912 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Jan 23 18:51:09.783921 kernel: Run /init as init process
Jan 23 18:51:09.783931 kernel: with arguments:
Jan 23 18:51:09.783943 kernel: /init
Jan 23 18:51:09.783952 kernel: with environment:
Jan 23 18:51:09.783961 kernel: HOME=/
Jan 23 18:51:09.783969 kernel: TERM=linux
Jan 23 18:51:09.783978 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 18:51:09.783988 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 18:51:09.783997 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 18:51:09.784007 kernel: PTP clock support registered
Jan 23 18:51:09.784018 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 18:51:09.784028 kernel: SCSI subsystem initialized
Jan 23 18:51:09.784038 kernel: hv_vmbus: registering driver hv_utils
Jan 23 18:51:09.784047 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 18:51:09.784056 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 18:51:09.784065 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 18:51:09.784074 kernel: hv_vmbus: registering driver hv_pci
Jan 23 18:51:09.784213 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Jan 23 18:51:09.784326 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Jan 23 18:51:09.784455 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Jan 23 18:51:09.784588 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 18:51:09.784723 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Jan 23 18:51:09.784846 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Jan 23 18:51:09.784960 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 18:51:09.785084 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Jan 23 18:51:09.787290 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 18:51:09.787457 kernel: scsi host0: storvsc_host_t
Jan 23 18:51:09.787619 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 23 18:51:09.787634 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 18:51:09.787644 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 18:51:09.787654 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 23 18:51:09.787773 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 18:51:09.787787 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 18:51:09.787799 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 23 18:51:09.787908 kernel: nvme nvme0: pci function c05b:00:00.0
Jan 23 18:51:09.788040 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Jan 23 18:51:09.788130 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 18:51:09.788142 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 18:51:09.788263 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 18:51:09.788275 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 18:51:09.788395 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 18:51:09.788407 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 18:51:09.788417 kernel: device-mapper: uevent: version 1.0.3
Jan 23 18:51:09.788427 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 18:51:09.788439 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 23 18:51:09.788468 kernel: raid6: avx512x4 gen() 44154 MB/s
Jan 23 18:51:09.788499 kernel: raid6: avx512x2 gen() 42956 MB/s
Jan 23 18:51:09.788509 kernel: raid6: avx512x1 gen() 25293 MB/s
Jan 23 18:51:09.788519 kernel: raid6: avx2x4 gen() 35197 MB/s
Jan 23 18:51:09.788529 kernel: raid6: avx2x2 gen() 36786 MB/s
Jan 23 18:51:09.788538 kernel: raid6: avx2x1 gen() 30332 MB/s
Jan 23 18:51:09.788548 kernel: raid6: using algorithm avx512x4 gen() 44154 MB/s
Jan 23 18:51:09.788562 kernel: raid6: .... xor() 7672 MB/s, rmw enabled
Jan 23 18:51:09.788572 kernel: raid6: using avx512x2 recovery algorithm
Jan 23 18:51:09.788582 kernel: xor: automatically using best checksumming function avx
Jan 23 18:51:09.788592 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 18:51:09.788603 kernel: BTRFS: device fsid ae5f9861-c401-42b4-99c9-2e3fe0b343c2 devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (934)
Jan 23 18:51:09.788613 kernel: BTRFS info (device dm-0): first mount of filesystem ae5f9861-c401-42b4-99c9-2e3fe0b343c2
Jan 23 18:51:09.788623 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:51:09.788637 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 18:51:09.788647 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 18:51:09.788657 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 18:51:09.788666 kernel: loop: module loaded
Jan 23 18:51:09.788677 kernel: loop0: detected capacity change from 0 to 100560
Jan 23 18:51:09.788686 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 18:51:09.788699 systemd[1]: Successfully made /usr/ read-only.
Jan 23 18:51:09.788717 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:51:09.788729 systemd[1]: Detected virtualization microsoft.
Jan 23 18:51:09.788739 systemd[1]: Detected architecture x86-64.
Jan 23 18:51:09.788749 systemd[1]: Running in initrd.
Jan 23 18:51:09.788759 systemd[1]: No hostname configured, using default hostname.
Jan 23 18:51:09.788769 systemd[1]: Hostname set to .
Jan 23 18:51:09.788783 systemd[1]: Initializing machine ID from random generator.
Jan 23 18:51:09.788797 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:51:09.788808 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:51:09.788819 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:51:09.788830 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:51:09.788841 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:51:09.788856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:51:09.788867 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:51:09.788878 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:51:09.788890 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:51:09.788903 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:51:09.788915 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:51:09.788927 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:51:09.788937 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:51:09.788949 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:51:09.788959 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:51:09.788971 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:51:09.788981 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:51:09.788990 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:51:09.789001 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 23 18:51:09.789013 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:51:09.789024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:51:09.789036 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:51:09.789052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:51:09.789063 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:51:09.789075 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:51:09.789085 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 18:51:09.789095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:51:09.789106 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:51:09.789116 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:51:09.789130 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:51:09.789141 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:51:09.789151 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:51:09.789161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:09.789175 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:51:09.789207 systemd-journald[1071]: Collecting audit messages is enabled. Jan 23 18:51:09.789233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 23 18:51:09.789247 kernel: audit: type=1130 audit(1769194269.773:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.789260 kernel: audit: type=1130 audit(1769194269.784:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.789271 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:51:09.789284 systemd-journald[1071]: Journal started Jan 23 18:51:09.789309 systemd-journald[1071]: Runtime Journal (/run/log/journal/bd1b77b835c442f99e27fb9d6f0c6cef) is 8M, max 158.5M, 150.5M free. Jan 23 18:51:09.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.796797 kernel: audit: type=1130 audit(1769194269.791:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.796836 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:51:09.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:09.806084 kernel: audit: type=1130 audit(1769194269.798:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.802615 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:51:09.814769 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:51:09.935036 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 18:51:09.939749 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:51:09.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.953492 kernel: audit: type=1130 audit(1769194269.945:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.953927 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:51:09.965048 kernel: audit: type=1130 audit(1769194269.958:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.965218 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:51:09.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:09.966579 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:51:09.980516 kernel: Bridge firewalling registered Jan 23 18:51:09.980879 systemd-modules-load[1074]: Inserted module 'br_netfilter' Jan 23 18:51:09.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.982847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:51:09.997279 kernel: audit: type=1130 audit(1769194269.982:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.997296 kernel: audit: type=1130 audit(1769194269.982:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:09.983409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:51:09.984259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:51:10.004818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:51:10.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:10.010492 kernel: audit: type=1130 audit(1769194270.004:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.010000 audit: BPF prog-id=6 op=LOAD Jan 23 18:51:10.012656 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:51:10.023727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:10.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.032591 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 18:51:10.060221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:51:10.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.065921 systemd-resolved[1096]: Positive Trust Anchors: Jan 23 18:51:10.065934 systemd-resolved[1096]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:51:10.065938 systemd-resolved[1096]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:51:10.065970 systemd-resolved[1096]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:51:10.068733 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:51:10.101972 systemd-resolved[1096]: Defaulting to hostname 'linux'. Jan 23 18:51:10.104645 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:51:10.107210 dracut-cmdline[1110]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:51:10.120657 kernel: kauditd_printk_skb: 3 callbacks suppressed Jan 23 18:51:10.120684 kernel: audit: type=1130 audit(1769194270.119:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:10.120644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:51:10.260499 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:51:10.353509 kernel: iscsi: registered transport (tcp) Jan 23 18:51:10.407561 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:51:10.407608 kernel: QLogic iSCSI HBA Driver Jan 23 18:51:10.462202 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:51:10.473133 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:51:10.483582 kernel: audit: type=1130 audit(1769194270.474:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.480303 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:51:10.514027 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:51:10.519821 kernel: audit: type=1130 audit(1769194270.515:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.520334 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 18:51:10.522591 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 18:51:10.558509 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:51:10.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.566513 kernel: audit: type=1130 audit(1769194270.559:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.566664 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:51:10.574570 kernel: audit: type=1334 audit(1769194270.560:18): prog-id=7 op=LOAD Jan 23 18:51:10.574592 kernel: audit: type=1334 audit(1769194270.565:19): prog-id=8 op=LOAD Jan 23 18:51:10.560000 audit: BPF prog-id=7 op=LOAD Jan 23 18:51:10.565000 audit: BPF prog-id=8 op=LOAD Jan 23 18:51:10.595747 systemd-udevd[1357]: Using default interface naming scheme 'v257'. Jan 23 18:51:10.606681 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:51:10.616001 kernel: audit: type=1130 audit(1769194270.606:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.611457 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:51:10.636904 dracut-pre-trigger[1411]: rd.md=0: removing MD RAID activation Jan 23 18:51:10.646926 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 23 18:51:10.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.657572 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:51:10.665552 kernel: audit: type=1130 audit(1769194270.654:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.665574 kernel: audit: type=1334 audit(1769194270.656:22): prog-id=9 op=LOAD Jan 23 18:51:10.656000 audit: BPF prog-id=9 op=LOAD Jan 23 18:51:10.673213 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:51:10.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.679583 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:51:10.688200 kernel: audit: type=1130 audit(1769194270.677:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.709922 systemd-networkd[1473]: lo: Link UP Jan 23 18:51:10.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.710559 systemd-networkd[1473]: lo: Gained carrier Jan 23 18:51:10.710973 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:51:10.713440 systemd[1]: Reached target network.target - Network. Jan 23 18:51:10.738920 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 18:51:10.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.747221 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 18:51:10.822463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:51:10.822721 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:10.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:10.827730 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:10.834735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:10.851129 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:51:10.851169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#164 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 18:51:10.865715 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 18:51:10.875756 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523403aa (unnamed net_device) (uninitialized): VF slot 1 added Jan 23 18:51:10.891510 kernel: AES CTR mode by8 optimization enabled Jan 23 18:51:10.891868 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:10.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:10.904363 systemd-networkd[1473]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:51:10.904372 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:51:10.908143 systemd-networkd[1473]: eth0: Link UP Jan 23 18:51:10.908294 systemd-networkd[1473]: eth0: Gained carrier Jan 23 18:51:10.908306 systemd-networkd[1473]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:51:10.918549 systemd-networkd[1473]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 18:51:10.992496 kernel: nvme nvme0: using unchecked data buffer Jan 23 18:51:11.067699 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 23 18:51:11.070534 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:51:11.182869 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 23 18:51:11.204735 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jan 23 18:51:11.218855 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jan 23 18:51:11.303917 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:51:11.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:11.304551 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:51:11.310398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 23 18:51:11.313573 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:51:11.330367 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:51:11.375183 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:51:11.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:11.897276 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jan 23 18:51:11.897550 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jan 23 18:51:11.900306 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jan 23 18:51:11.902030 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 18:51:11.906665 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jan 23 18:51:11.910611 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jan 23 18:51:11.915764 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jan 23 18:51:11.915841 kernel: pci 7870:00:00.0: enabling Extended Tags Jan 23 18:51:11.930074 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 18:51:11.930282 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jan 23 18:51:11.934544 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jan 23 18:51:11.990943 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jan 23 18:51:12.000494 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jan 23 18:51:12.004339 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523403aa eth0: VF registering: eth1 Jan 23 18:51:12.004527 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jan 23 18:51:12.008501 kernel: mana 
7870:00:00.0 enP30832s1: renamed from eth1 Jan 23 18:51:12.008933 systemd-networkd[1473]: eth1: Interface name change detected, renamed to enP30832s1. Jan 23 18:51:12.112501 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 18:51:12.116104 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:51:12.116348 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523403aa eth0: Data path switched to VF: enP30832s1 Jan 23 18:51:12.116665 systemd-networkd[1473]: enP30832s1: Link UP Jan 23 18:51:12.117703 systemd-networkd[1473]: enP30832s1: Gained carrier Jan 23 18:51:12.261675 systemd-networkd[1473]: eth0: Gained IPv6LL Jan 23 18:51:12.371228 disk-uuid[1627]: Warning: The kernel is still using the old partition table. Jan 23 18:51:12.371228 disk-uuid[1627]: The new table will be used at the next reboot or after you Jan 23 18:51:12.371228 disk-uuid[1627]: run partprobe(8) or kpartx(8) Jan 23 18:51:12.371228 disk-uuid[1627]: The operation has completed successfully. Jan 23 18:51:12.381341 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:51:12.381433 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:51:12.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:12.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:12.387538 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 23 18:51:12.424491 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1673) Jan 23 18:51:12.427081 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:51:12.427113 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:51:12.444914 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 18:51:12.444947 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 18:51:12.445020 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 18:51:12.451523 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:51:12.452111 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:51:12.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:12.455645 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 18:51:13.827602 ignition[1692]: Ignition 2.24.0 Jan 23 18:51:13.827614 ignition[1692]: Stage: fetch-offline Jan 23 18:51:13.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:13.830286 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:51:13.827866 ignition[1692]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:13.834268 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 18:51:13.827876 ignition[1692]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:51:13.827968 ignition[1692]: parsed url from cmdline: "" Jan 23 18:51:13.827971 ignition[1692]: no config URL provided Jan 23 18:51:13.827975 ignition[1692]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:51:13.827981 ignition[1692]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:51:13.827986 ignition[1692]: failed to fetch config: resource requires networking Jan 23 18:51:13.829139 ignition[1692]: Ignition finished successfully Jan 23 18:51:13.854919 ignition[1699]: Ignition 2.24.0 Jan 23 18:51:13.854925 ignition[1699]: Stage: fetch Jan 23 18:51:13.855145 ignition[1699]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:13.855152 ignition[1699]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:51:13.855235 ignition[1699]: parsed url from cmdline: "" Jan 23 18:51:13.855238 ignition[1699]: no config URL provided Jan 23 18:51:13.855243 ignition[1699]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:51:13.855248 ignition[1699]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:51:13.855268 ignition[1699]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 18:51:13.918458 ignition[1699]: GET result: OK Jan 23 18:51:13.918552 ignition[1699]: config has been read from IMDS userdata Jan 23 18:51:13.918580 ignition[1699]: parsing config with SHA512: 3f8c2ddd535ca3366f90ac33b1580093cdc8d623984a13a46c2e92cac13607b4dc0de3b27c4e92a5930c87134f2500f4b1fcc3c603e9d015e12664aa045861b6 Jan 23 18:51:13.924328 unknown[1699]: fetched base config from "system" Jan 23 18:51:13.924339 unknown[1699]: fetched base config from "system" Jan 23 18:51:13.924667 ignition[1699]: fetch: fetch complete Jan 23 18:51:13.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:13.924344 unknown[1699]: fetched user config from "azure" Jan 23 18:51:13.924672 ignition[1699]: fetch: fetch passed Jan 23 18:51:13.926468 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 18:51:13.924708 ignition[1699]: Ignition finished successfully Jan 23 18:51:13.931628 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 18:51:13.958729 ignition[1705]: Ignition 2.24.0 Jan 23 18:51:13.958740 ignition[1705]: Stage: kargs Jan 23 18:51:13.958930 ignition[1705]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:13.958937 ignition[1705]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:51:13.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:13.962497 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:51:13.959655 ignition[1705]: kargs: kargs passed Jan 23 18:51:13.966123 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:51:13.959687 ignition[1705]: Ignition finished successfully Jan 23 18:51:13.992875 ignition[1711]: Ignition 2.24.0 Jan 23 18:51:13.992886 ignition[1711]: Stage: disks Jan 23 18:51:13.995196 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:51:13.993085 ignition[1711]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:13.993093 ignition[1711]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:51:13.993792 ignition[1711]: disks: disks passed Jan 23 18:51:13.993825 ignition[1711]: Ignition finished successfully Jan 23 18:51:14.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:14.004449 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:51:14.007511 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:51:14.010535 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:51:14.015529 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:51:14.018203 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:51:14.022606 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:51:14.096146 systemd-fsck[1719]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 23 18:51:14.100232 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:51:14.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:14.105446 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:51:14.436492 kernel: EXT4-fs (nvme0n1p9): mounted filesystem eebf2bdd-2461-4b18-9f37-721daf86511d r/w with ordered data mode. Quota mode: none. Jan 23 18:51:14.436759 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:51:14.440902 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:51:14.470146 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:51:14.483557 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:51:14.487404 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... 
Jan 23 18:51:14.493498 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1728) Jan 23 18:51:14.493561 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:51:14.494008 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:51:14.500502 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:51:14.502501 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:51:14.505068 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:51:14.510672 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:51:14.518821 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 18:51:14.518843 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 18:51:14.518853 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 18:51:14.518581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:51:15.128039 coreos-metadata[1730]: Jan 23 18:51:15.127 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 18:51:15.131613 coreos-metadata[1730]: Jan 23 18:51:15.130 INFO Fetch successful Jan 23 18:51:15.131613 coreos-metadata[1730]: Jan 23 18:51:15.130 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 18:51:15.142450 coreos-metadata[1730]: Jan 23 18:51:15.142 INFO Fetch successful Jan 23 18:51:15.156642 coreos-metadata[1730]: Jan 23 18:51:15.156 INFO wrote hostname ci-4547.1.0-a-90f1f3b2aa to /sysroot/etc/hostname Jan 23 18:51:15.160266 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 23 18:51:15.168641 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 23 18:51:15.168667 kernel: audit: type=1130 audit(1769194275.163:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:15.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:16.199534 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:51:16.202149 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:51:16.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:16.207595 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:51:16.218125 kernel: audit: type=1130 audit(1769194276.200:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:16.248750 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:51:16.254494 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:51:16.264790 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:51:16.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:16.274529 kernel: audit: type=1130 audit(1769194276.268:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 23 18:51:16.279175 ignition[1832]: INFO : Ignition 2.24.0 Jan 23 18:51:16.279175 ignition[1832]: INFO : Stage: mount Jan 23 18:51:16.281794 ignition[1832]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:16.281794 ignition[1832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:51:16.281794 ignition[1832]: INFO : mount: mount passed Jan 23 18:51:16.281794 ignition[1832]: INFO : Ignition finished successfully Jan 23 18:51:16.281433 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:51:16.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:16.290613 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:51:16.296541 kernel: audit: type=1130 audit(1769194276.289:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:16.307748 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:51:16.328275 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1842) Jan 23 18:51:16.328313 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:51:16.329900 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:51:16.334706 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 18:51:16.334745 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 18:51:16.334757 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 18:51:16.337303 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 18:51:16.358861 ignition[1859]: INFO : Ignition 2.24.0 Jan 23 18:51:16.358861 ignition[1859]: INFO : Stage: files Jan 23 18:51:16.361249 ignition[1859]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:16.361249 ignition[1859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:51:16.361249 ignition[1859]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:51:16.385115 ignition[1859]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:51:16.385115 ignition[1859]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:51:16.456597 ignition[1859]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:51:16.459555 ignition[1859]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:51:16.459555 ignition[1859]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:51:16.458387 unknown[1859]: wrote ssh authorized keys file for user: core Jan 23 18:51:16.472087 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 18:51:16.477541 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 18:51:16.544360 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:51:16.605966 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 18:51:16.609233 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:51:16.609233 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 23 18:51:16.609233 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:51:16.609233 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:51:16.609233 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:51:16.609233 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:51:16.609233 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:51:16.609233 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:51:16.633970 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:51:16.633970 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:51:16.633970 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:51:16.633970 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:51:16.633970 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:51:16.633970 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 18:51:17.090805 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:51:17.772324 ignition[1859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:51:17.772324 ignition[1859]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:51:17.838289 ignition[1859]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:51:17.846850 ignition[1859]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:51:17.846850 ignition[1859]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:51:17.846850 ignition[1859]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:51:17.864542 ignition[1859]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:51:17.864542 ignition[1859]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:51:17.864542 ignition[1859]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:51:17.864542 ignition[1859]: INFO : files: files passed Jan 23 18:51:17.864542 ignition[1859]: INFO : Ignition finished successfully Jan 23 18:51:17.884536 kernel: audit: type=1130 audit(1769194277.869:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:17.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.853137 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:51:17.875101 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:51:17.889540 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:51:17.897204 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:51:17.908443 kernel: audit: type=1130 audit(1769194277.900:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.908468 kernel: audit: type=1131 audit(1769194277.900:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.897293 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 23 18:51:17.915405 initrd-setup-root-after-ignition[1891]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:51:17.915405 initrd-setup-root-after-ignition[1891]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:51:17.920549 initrd-setup-root-after-ignition[1895]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:51:17.923823 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:51:17.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.928929 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:51:17.932493 kernel: audit: type=1130 audit(1769194277.927:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.935425 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:51:17.978804 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:51:17.978919 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:51:17.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.985951 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:51:18.000523 kernel: audit: type=1130 audit(1769194277.985:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:18.000549 kernel: audit: type=1131 audit(1769194277.985:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:17.990835 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:51:17.994841 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:51:17.995558 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:51:18.020281 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:51:18.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.024625 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:51:18.039836 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:51:18.040057 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:51:18.043086 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:51:18.046649 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:51:18.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.049327 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 23 18:51:18.049440 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:51:18.058198 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:51:18.060831 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:51:18.063611 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:51:18.066104 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:51:18.068626 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:51:18.071686 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:51:18.076633 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:51:18.079393 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:51:18.081325 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:51:18.084096 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:51:18.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.085515 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:51:18.088795 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:51:18.088931 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:51:18.092872 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:51:18.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.093163 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 18:51:18.097080 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:51:18.099355 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:51:18.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.104731 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:51:18.104836 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:51:18.113585 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:51:18.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.113709 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:51:18.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.119576 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:51:18.119682 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:51:18.125440 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 18:51:18.125568 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 18:51:18.130085 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:51:18.139470 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:51:18.146603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 23 18:51:18.146766 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:51:18.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.164788 ignition[1915]: INFO : Ignition 2.24.0 Jan 23 18:51:18.164788 ignition[1915]: INFO : Stage: umount Jan 23 18:51:18.164788 ignition[1915]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:18.164788 ignition[1915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:51:18.164788 ignition[1915]: INFO : umount: umount passed Jan 23 18:51:18.164788 ignition[1915]: INFO : Ignition finished successfully Jan 23 18:51:18.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.159310 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:51:18.159425 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:51:18.162030 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:51:18.162159 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:51:18.182665 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:51:18.183160 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:51:18.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:18.189128 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:51:18.189215 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:51:18.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.194915 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:51:18.194990 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:51:18.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.198085 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:51:18.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.198122 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:51:18.204155 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 18:51:18.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:18.204205 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 18:51:18.207321 systemd[1]: Stopped target network.target - Network. Jan 23 18:51:18.212210 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:51:18.212256 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:51:18.216148 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:51:18.218657 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:51:18.223145 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:51:18.234419 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:51:18.237913 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:51:18.238959 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:51:18.240380 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:51:18.244093 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:51:18.244115 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:51:18.248243 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 18:51:18.248261 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:51:18.252047 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:51:18.252094 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:51:18.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.260396 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:51:18.260442 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 23 18:51:18.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.264849 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:51:18.268773 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:51:18.272618 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:51:18.275954 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:51:18.276053 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:51:18.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.280000 audit: BPF prog-id=6 op=UNLOAD Jan 23 18:51:18.281056 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:51:18.281129 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:51:18.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.286000 audit: BPF prog-id=9 op=UNLOAD Jan 23 18:51:18.287283 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:51:18.290301 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:51:18.290338 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:51:18.296571 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:51:18.299546 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 23 18:51:18.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.299604 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:51:18.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.303553 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:51:18.303600 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:51:18.307571 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:51:18.307615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:51:18.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.318613 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:51:18.338192 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:51:18.339817 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:51:18.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.345365 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 23 18:51:18.356582 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523403aa eth0: Data path switched from VF: enP30832s1 Jan 23 18:51:18.358472 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:51:18.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.345498 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:51:18.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.349888 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:51:18.349923 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:51:18.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.350414 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:51:18.350460 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:51:18.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:18.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.356660 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:51:18.356711 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:51:18.358174 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:51:18.358209 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:51:18.362393 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:51:18.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.363348 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:51:18.363388 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:51:18.370581 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:51:18.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:18.370632 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:51:18.372943 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 18:51:18.372989 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:51:18.378806 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:51:18.378843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:51:18.379277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:51:18.379309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:18.380289 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:51:18.386558 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:51:18.400721 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:51:18.400807 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:51:18.667391 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:51:18.667527 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:51:18.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.671969 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:51:18.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:18.672381 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:51:18.672436 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 23 18:51:18.674598 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:51:18.685975 systemd[1]: Switching root. Jan 23 18:51:18.790669 systemd-journald[1071]: Journal stopped Jan 23 18:51:23.102576 systemd-journald[1071]: Received SIGTERM from PID 1 (systemd). Jan 23 18:51:23.102607 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:51:23.102623 kernel: SELinux: policy capability open_perms=1 Jan 23 18:51:23.102633 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:51:23.102643 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:51:23.102653 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:51:23.102665 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:51:23.102674 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:51:23.102685 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:51:23.102694 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:51:23.102705 systemd[1]: Successfully loaded SELinux policy in 149.350ms. Jan 23 18:51:23.102717 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.243ms. Jan 23 18:51:23.102730 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:51:23.102744 systemd[1]: Detected virtualization microsoft. Jan 23 18:51:23.102755 systemd[1]: Detected architecture x86-64. Jan 23 18:51:23.102770 systemd[1]: Detected first boot. Jan 23 18:51:23.102781 systemd[1]: Hostname set to . Jan 23 18:51:23.102794 systemd[1]: Initializing machine ID from random generator. 
Jan 23 18:51:23.102805 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 23 18:51:23.102816 kernel: audit: type=1334 audit(1769194280.758:90): prog-id=10 op=LOAD Jan 23 18:51:23.102826 kernel: audit: type=1334 audit(1769194280.758:91): prog-id=10 op=UNLOAD Jan 23 18:51:23.102836 kernel: audit: type=1334 audit(1769194280.758:92): prog-id=11 op=LOAD Jan 23 18:51:23.102845 kernel: audit: type=1334 audit(1769194280.758:93): prog-id=11 op=UNLOAD Jan 23 18:51:23.102858 zram_generator::config[1958]: No configuration found. Jan 23 18:51:23.102870 kernel: Guest personality initialized and is inactive Jan 23 18:51:23.102881 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 23 18:51:23.102891 kernel: Initialized host personality Jan 23 18:51:23.102902 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:51:23.102912 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:51:23.102922 kernel: audit: type=1334 audit(1769194282.678:94): prog-id=12 op=LOAD Jan 23 18:51:23.102933 kernel: audit: type=1334 audit(1769194282.678:95): prog-id=3 op=UNLOAD Jan 23 18:51:23.102944 kernel: audit: type=1334 audit(1769194282.678:96): prog-id=13 op=LOAD Jan 23 18:51:23.102954 kernel: audit: type=1334 audit(1769194282.678:97): prog-id=14 op=LOAD Jan 23 18:51:23.102964 kernel: audit: type=1334 audit(1769194282.678:98): prog-id=4 op=UNLOAD Jan 23 18:51:23.102974 kernel: audit: type=1334 audit(1769194282.678:99): prog-id=5 op=UNLOAD Jan 23 18:51:23.102984 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:51:23.102995 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:51:23.103007 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:51:23.103022 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:51:23.103034 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Jan 23 18:51:23.103049 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:51:23.103060 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:51:23.103071 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:51:23.103083 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:51:23.103100 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:51:23.103111 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:51:23.103123 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:51:23.103135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:51:23.103145 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:51:23.103158 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:51:23.103169 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:51:23.103179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:51:23.103190 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:51:23.103203 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:51:23.103214 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:51:23.103228 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:51:23.103239 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:51:23.103249 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jan 23 18:51:23.103260 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:51:23.103271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:51:23.103283 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:51:23.103295 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 23 18:51:23.103309 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:51:23.103319 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:51:23.103330 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:51:23.103341 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:51:23.103355 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:51:23.103366 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:51:23.103378 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 23 18:51:23.103389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:51:23.103400 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 18:51:23.103410 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 18:51:23.103423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:51:23.103435 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:51:23.103447 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:51:23.103458 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:51:23.103469 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:51:23.103494 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 23 18:51:23.103507 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:23.103521 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:51:23.103533 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:51:23.103544 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:51:23.103555 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:51:23.103566 systemd[1]: Reached target machines.target - Containers. Jan 23 18:51:23.103577 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:51:23.103592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:51:23.103604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:51:23.103616 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:51:23.103627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:51:23.103638 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:51:23.105576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:51:23.105592 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:51:23.105608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:51:23.105622 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:51:23.105643 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 23 18:51:23.105664 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:51:23.105680 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:51:23.105693 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:51:23.105706 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:51:23.105720 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:51:23.105731 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:51:23.105743 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:51:23.105755 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:51:23.105767 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:51:23.105778 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:51:23.105792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:23.105804 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:51:23.105815 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:51:23.105826 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:51:23.105838 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:51:23.105849 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:51:23.105860 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 23 18:51:23.105873 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:51:23.105910 systemd-journald[2039]: Collecting audit messages is enabled. Jan 23 18:51:23.105938 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:51:23.105952 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:51:23.105963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:51:23.105976 systemd-journald[2039]: Journal started Jan 23 18:51:23.106005 systemd-journald[2039]: Runtime Journal (/run/log/journal/32643f3c2f30429db14fd465357719ab) is 8M, max 158.5M, 150.5M free. Jan 23 18:51:22.814000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 18:51:22.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:22.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:22.998000 audit: BPF prog-id=14 op=UNLOAD Jan 23 18:51:22.998000 audit: BPF prog-id=13 op=UNLOAD Jan 23 18:51:22.999000 audit: BPF prog-id=15 op=LOAD Jan 23 18:51:22.999000 audit: BPF prog-id=16 op=LOAD Jan 23 18:51:22.999000 audit: BPF prog-id=17 op=LOAD Jan 23 18:51:23.094000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 18:51:23.094000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe7deaa050 a2=4000 a3=0 items=0 ppid=1 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:51:23.094000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 18:51:23.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:22.666696 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:51:22.679648 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 18:51:23.108500 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:51:22.680941 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 23 18:51:23.114401 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:51:23.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.116341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:51:23.116628 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:51:23.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.121257 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:51:23.121458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:51:23.121512 kernel: fuse: init (API version 7.41) Jan 23 18:51:23.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:23.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.125060 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:51:23.125230 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:51:23.128762 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:51:23.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.130580 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:51:23.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.134255 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:51:23.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:23.139584 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:51:23.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.153261 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:51:23.156122 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 23 18:51:23.162569 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:51:23.169878 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:51:23.172188 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:51:23.172219 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:51:23.179309 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:51:23.183692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:51:23.183796 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:51:23.185094 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:51:23.191657 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:51:23.194083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:51:23.197593 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 23 18:51:23.199594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:51:23.201596 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:51:23.206630 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:51:23.219798 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:51:23.223379 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:51:23.225547 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:51:23.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.232283 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:51:23.236436 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:51:23.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.240345 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:51:23.244306 systemd-journald[2039]: Time spent on flushing to /var/log/journal/32643f3c2f30429db14fd465357719ab is 15.203ms for 1131 entries. Jan 23 18:51:23.244306 systemd-journald[2039]: System Journal (/var/log/journal/32643f3c2f30429db14fd465357719ab) is 8M, max 2.2G, 2.2G free. Jan 23 18:51:23.283365 systemd-journald[2039]: Received client request to flush runtime journal. 
Jan 23 18:51:23.283412 kernel: ACPI: bus type drm_connector registered Jan 23 18:51:23.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.250708 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:51:23.258659 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:51:23.258794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:51:23.284190 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:51:23.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.297428 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:51:23.302498 kernel: loop1: detected capacity change from 0 to 27728 Jan 23 18:51:23.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.369812 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:51:23.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:23.420217 systemd-tmpfiles[2074]: ACLs are not supported, ignoring. Jan 23 18:51:23.420234 systemd-tmpfiles[2074]: ACLs are not supported, ignoring. Jan 23 18:51:23.423241 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:51:23.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.687532 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:51:23.770552 kernel: loop2: detected capacity change from 0 to 50784 Jan 23 18:51:23.780620 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:51:23.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:23.785619 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:51:23.955865 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:51:23.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.017203 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:51:24.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:24.019000 audit: BPF prog-id=18 op=LOAD Jan 23 18:51:24.020000 audit: BPF prog-id=19 op=LOAD Jan 23 18:51:24.020000 audit: BPF prog-id=20 op=LOAD Jan 23 18:51:24.021110 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 18:51:24.024000 audit: BPF prog-id=21 op=LOAD Jan 23 18:51:24.027619 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:51:24.031600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:51:24.034000 audit: BPF prog-id=22 op=LOAD Jan 23 18:51:24.034000 audit: BPF prog-id=23 op=LOAD Jan 23 18:51:24.034000 audit: BPF prog-id=24 op=LOAD Jan 23 18:51:24.036573 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 18:51:24.045000 audit: BPF prog-id=25 op=LOAD Jan 23 18:51:24.045000 audit: BPF prog-id=26 op=LOAD Jan 23 18:51:24.045000 audit: BPF prog-id=27 op=LOAD Jan 23 18:51:24.046684 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:51:24.061918 systemd-tmpfiles[2125]: ACLs are not supported, ignoring. Jan 23 18:51:24.061935 systemd-tmpfiles[2125]: ACLs are not supported, ignoring. Jan 23 18:51:24.065106 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:51:24.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.068000 audit: BPF prog-id=8 op=UNLOAD Jan 23 18:51:24.068000 audit: BPF prog-id=7 op=UNLOAD Jan 23 18:51:24.069000 audit: BPF prog-id=28 op=LOAD Jan 23 18:51:24.069000 audit: BPF prog-id=29 op=LOAD Jan 23 18:51:24.070337 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 23 18:51:24.102603 systemd-udevd[2130]: Using default interface naming scheme 'v257'. Jan 23 18:51:24.111819 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:51:24.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.121965 systemd-nsresourced[2126]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 23 18:51:24.124152 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 18:51:24.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.188961 systemd-oomd[2123]: No swap; memory pressure usage will be degraded Jan 23 18:51:24.189408 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 18:51:24.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.211751 systemd-resolved[2124]: Positive Trust Anchors: Jan 23 18:51:24.211763 systemd-resolved[2124]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:51:24.211767 systemd-resolved[2124]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:51:24.211802 systemd-resolved[2124]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:51:24.241143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:51:24.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.243000 audit: BPF prog-id=30 op=LOAD Jan 23 18:51:24.247644 kernel: loop3: detected capacity change from 0 to 111560 Jan 23 18:51:24.247200 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:51:24.312088 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:51:24.357633 systemd-resolved[2124]: Using system hostname 'ci-4547.1.0-a-90f1f3b2aa'. Jan 23 18:51:24.359241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 18:51:24.358815 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:51:24.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:24.360923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:51:24.368507 kernel: hv_vmbus: registering driver hv_balloon Jan 23 18:51:24.369527 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:51:24.369581 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 18:51:24.373142 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 18:51:24.373603 systemd-networkd[2150]: lo: Link UP Jan 23 18:51:24.373611 systemd-networkd[2150]: lo: Gained carrier Jan 23 18:51:24.375004 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:51:24.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.378932 systemd[1]: Reached target network.target - Network. Jan 23 18:51:24.381144 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 18:51:24.381189 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 18:51:24.382794 kernel: Console: switching to colour dummy device 80x25 Jan 23 18:51:24.383584 systemd-networkd[2150]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:51:24.383594 systemd-networkd[2150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:51:24.387139 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 18:51:24.387402 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 18:51:24.388125 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:51:24.392031 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 23 18:51:24.396993 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:51:24.399510 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523403aa eth0: Data path switched to VF: enP30832s1 Jan 23 18:51:24.402311 systemd-networkd[2150]: enP30832s1: Link UP Jan 23 18:51:24.402858 systemd-networkd[2150]: eth0: Link UP Jan 23 18:51:24.402863 systemd-networkd[2150]: eth0: Gained carrier Jan 23 18:51:24.402880 systemd-networkd[2150]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:51:24.406809 systemd-networkd[2150]: enP30832s1: Gained carrier Jan 23 18:51:24.415558 systemd-networkd[2150]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 18:51:24.435017 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:51:24.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.540300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:24.571753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:51:24.571981 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:24.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:51:24.576644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:24.589470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:51:24.589696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:24.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:24.599614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:24.638513 kernel: loop4: detected capacity change from 0 to 224512 Jan 23 18:51:24.657655 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 23 18:51:24.699935 kernel: loop5: detected capacity change from 0 to 27728 Jan 23 18:51:24.710499 kernel: loop6: detected capacity change from 0 to 50784 Jan 23 18:51:24.713623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 23 18:51:24.716425 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:51:24.724546 kernel: loop7: detected capacity change from 0 to 111560 Jan 23 18:51:24.736509 kernel: loop1: detected capacity change from 0 to 224512 Jan 23 18:51:24.747887 (sd-merge)[2230]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 23 18:51:24.751224 (sd-merge)[2230]: Merged extensions into '/usr'. Jan 23 18:51:24.754268 systemd[1]: Reload requested from client PID 2073 ('systemd-sysext') (unit systemd-sysext.service)... 
Jan 23 18:51:24.754280 systemd[1]: Reloading... Jan 23 18:51:24.813561 zram_generator::config[2267]: No configuration found. Jan 23 18:51:25.019812 systemd[1]: Reloading finished in 265 ms. Jan 23 18:51:25.042529 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:51:25.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.044696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:25.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.048652 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:51:25.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.057282 systemd[1]: Starting ensure-sysext.service... Jan 23 18:51:25.061601 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 23 18:51:25.064000 audit: BPF prog-id=31 op=LOAD Jan 23 18:51:25.064000 audit: BPF prog-id=30 op=UNLOAD Jan 23 18:51:25.064000 audit: BPF prog-id=32 op=LOAD Jan 23 18:51:25.064000 audit: BPF prog-id=33 op=LOAD Jan 23 18:51:25.064000 audit: BPF prog-id=28 op=UNLOAD Jan 23 18:51:25.064000 audit: BPF prog-id=29 op=UNLOAD Jan 23 18:51:25.065000 audit: BPF prog-id=34 op=LOAD Jan 23 18:51:25.065000 audit: BPF prog-id=22 op=UNLOAD Jan 23 18:51:25.065000 audit: BPF prog-id=35 op=LOAD Jan 23 18:51:25.065000 audit: BPF prog-id=36 op=LOAD Jan 23 18:51:25.065000 audit: BPF prog-id=23 op=UNLOAD Jan 23 18:51:25.065000 audit: BPF prog-id=24 op=UNLOAD Jan 23 18:51:25.066000 audit: BPF prog-id=37 op=LOAD Jan 23 18:51:25.066000 audit: BPF prog-id=15 op=UNLOAD Jan 23 18:51:25.066000 audit: BPF prog-id=38 op=LOAD Jan 23 18:51:25.066000 audit: BPF prog-id=39 op=LOAD Jan 23 18:51:25.066000 audit: BPF prog-id=16 op=UNLOAD Jan 23 18:51:25.066000 audit: BPF prog-id=17 op=UNLOAD Jan 23 18:51:25.067000 audit: BPF prog-id=40 op=LOAD Jan 23 18:51:25.073000 audit: BPF prog-id=21 op=UNLOAD Jan 23 18:51:25.073000 audit: BPF prog-id=41 op=LOAD Jan 23 18:51:25.073000 audit: BPF prog-id=18 op=UNLOAD Jan 23 18:51:25.073000 audit: BPF prog-id=42 op=LOAD Jan 23 18:51:25.073000 audit: BPF prog-id=43 op=LOAD Jan 23 18:51:25.073000 audit: BPF prog-id=19 op=UNLOAD Jan 23 18:51:25.073000 audit: BPF prog-id=20 op=UNLOAD Jan 23 18:51:25.074000 audit: BPF prog-id=44 op=LOAD Jan 23 18:51:25.074000 audit: BPF prog-id=25 op=UNLOAD Jan 23 18:51:25.074000 audit: BPF prog-id=45 op=LOAD Jan 23 18:51:25.074000 audit: BPF prog-id=46 op=LOAD Jan 23 18:51:25.074000 audit: BPF prog-id=26 op=UNLOAD Jan 23 18:51:25.074000 audit: BPF prog-id=27 op=UNLOAD Jan 23 18:51:25.081218 systemd[1]: Reload requested from client PID 2327 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:51:25.081238 systemd[1]: Reloading... 
Jan 23 18:51:25.086077 systemd-tmpfiles[2328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:51:25.086102 systemd-tmpfiles[2328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:51:25.086333 systemd-tmpfiles[2328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:51:25.087401 systemd-tmpfiles[2328]: ACLs are not supported, ignoring. Jan 23 18:51:25.087454 systemd-tmpfiles[2328]: ACLs are not supported, ignoring. Jan 23 18:51:25.105253 systemd-tmpfiles[2328]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:51:25.105415 systemd-tmpfiles[2328]: Skipping /boot Jan 23 18:51:25.113770 systemd-tmpfiles[2328]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:51:25.113874 systemd-tmpfiles[2328]: Skipping /boot Jan 23 18:51:25.148551 zram_generator::config[2362]: No configuration found. Jan 23 18:51:25.331788 systemd[1]: Reloading finished in 250 ms. 
Jan 23 18:51:25.345000 audit: BPF prog-id=47 op=LOAD Jan 23 18:51:25.345000 audit: BPF prog-id=40 op=UNLOAD Jan 23 18:51:25.345000 audit: BPF prog-id=48 op=LOAD Jan 23 18:51:25.345000 audit: BPF prog-id=49 op=LOAD Jan 23 18:51:25.345000 audit: BPF prog-id=32 op=UNLOAD Jan 23 18:51:25.346000 audit: BPF prog-id=33 op=UNLOAD Jan 23 18:51:25.346000 audit: BPF prog-id=50 op=LOAD Jan 23 18:51:25.346000 audit: BPF prog-id=37 op=UNLOAD Jan 23 18:51:25.346000 audit: BPF prog-id=51 op=LOAD Jan 23 18:51:25.346000 audit: BPF prog-id=52 op=LOAD Jan 23 18:51:25.346000 audit: BPF prog-id=38 op=UNLOAD Jan 23 18:51:25.346000 audit: BPF prog-id=39 op=UNLOAD Jan 23 18:51:25.347000 audit: BPF prog-id=53 op=LOAD Jan 23 18:51:25.347000 audit: BPF prog-id=44 op=UNLOAD Jan 23 18:51:25.347000 audit: BPF prog-id=54 op=LOAD Jan 23 18:51:25.347000 audit: BPF prog-id=55 op=LOAD Jan 23 18:51:25.347000 audit: BPF prog-id=45 op=UNLOAD Jan 23 18:51:25.347000 audit: BPF prog-id=46 op=UNLOAD Jan 23 18:51:25.349000 audit: BPF prog-id=56 op=LOAD Jan 23 18:51:25.352000 audit: BPF prog-id=31 op=UNLOAD Jan 23 18:51:25.352000 audit: BPF prog-id=57 op=LOAD Jan 23 18:51:25.353000 audit: BPF prog-id=34 op=UNLOAD Jan 23 18:51:25.353000 audit: BPF prog-id=58 op=LOAD Jan 23 18:51:25.353000 audit: BPF prog-id=59 op=LOAD Jan 23 18:51:25.353000 audit: BPF prog-id=35 op=UNLOAD Jan 23 18:51:25.353000 audit: BPF prog-id=36 op=UNLOAD Jan 23 18:51:25.353000 audit: BPF prog-id=60 op=LOAD Jan 23 18:51:25.353000 audit: BPF prog-id=41 op=UNLOAD Jan 23 18:51:25.354000 audit: BPF prog-id=61 op=LOAD Jan 23 18:51:25.354000 audit: BPF prog-id=62 op=LOAD Jan 23 18:51:25.354000 audit: BPF prog-id=42 op=UNLOAD Jan 23 18:51:25.354000 audit: BPF prog-id=43 op=UNLOAD Jan 23 18:51:25.356553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 18:51:25.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.365002 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:51:25.368543 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:51:25.374445 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:51:25.379729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:51:25.384732 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:51:25.390701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:25.390854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:51:25.391805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:51:25.396688 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:51:25.407000 audit[2426]: SYSTEM_BOOT pid=2426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.409361 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:51:25.412656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:51:25.413544 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 23 18:51:25.413649 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:51:25.413743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:25.417594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:51:25.417802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:51:25.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.419921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:51:25.423623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:51:25.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.428322 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:51:25.428503 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 23 18:51:25.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.439283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:25.439749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:51:25.441699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:51:25.447949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:51:25.456174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:51:25.458301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:51:25.458469 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:51:25.458585 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:51:25.458679 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:25.460238 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 23 18:51:25.462874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:51:25.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.463051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:51:25.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.465987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:51:25.466153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:51:25.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.476025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:25.476260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:51:25.477216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 23 18:51:25.480679 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:51:25.486416 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:51:25.488759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:51:25.489830 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:51:25.489931 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:51:25.490080 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:51:25.492000 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:25.493534 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:51:25.493852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:51:25.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.496898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:51:25.497694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 23 18:51:25.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.500839 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:51:25.500967 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:51:25.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.503323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:51:25.503539 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:51:25.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.508508 systemd[1]: Finished ensure-sysext.service. 
Jan 23 18:51:25.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.512185 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:51:25.512233 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:51:25.534783 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:51:25.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:51:25.703000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 18:51:25.703000 audit[2467]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9ce42080 a2=420 a3=0 items=0 ppid=2422 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:51:25.703000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:51:25.704545 augenrules[2467]: No rules Jan 23 18:51:25.704893 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:51:25.705135 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:51:25.957613 systemd-networkd[2150]: eth0: Gained IPv6LL Jan 23 18:51:25.959601 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 23 18:51:25.962735 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:51:25.976897 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:51:25.978896 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:51:31.922535 ldconfig[2424]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:51:31.931605 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:51:31.934745 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:51:31.951272 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:51:31.955764 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:51:31.958646 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:51:31.961562 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:51:31.963338 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:51:31.966633 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:51:31.969587 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:51:31.972561 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 18:51:31.974218 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 23 18:51:31.975543 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 23 18:51:31.978529 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:51:31.978558 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:51:31.981532 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:51:31.997080 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:51:32.001500 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:51:32.006095 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:51:32.007887 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:51:32.009824 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:51:32.012733 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:51:32.014592 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:51:32.018117 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:51:32.020315 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:51:32.021577 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:51:32.024576 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:51:32.024601 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:51:32.026154 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 18:51:32.028151 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:51:32.032609 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 18:51:32.039643 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 23 18:51:32.043671 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:51:32.046646 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:51:32.050385 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:51:32.053568 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:51:32.054567 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:51:32.058323 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 23 18:51:32.062641 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 18:51:32.065668 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 18:51:32.068946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:51:32.074593 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:51:32.077915 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:51:32.081410 KVP[2491]: KVP starting; pid is:2491 Jan 23 18:51:32.083634 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:51:32.089731 jq[2485]: false Jan 23 18:51:32.090629 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:51:32.094651 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:51:32.095155 KVP[2491]: KVP LIC Version: 3.1 Jan 23 18:51:32.095547 kernel: hv_utils: KVP IC version 4.0 Jan 23 18:51:32.108258 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 23 18:51:32.110128 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:51:32.110574 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:51:32.116404 extend-filesystems[2486]: Found /dev/nvme0n1p6 Jan 23 18:51:32.114179 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:51:32.118746 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:51:32.124949 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:51:32.125162 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:51:32.126219 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:51:32.126421 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:51:32.132555 extend-filesystems[2486]: Found /dev/nvme0n1p9 Jan 23 18:51:32.137916 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:51:32.155669 jq[2504]: true Jan 23 18:51:32.158875 extend-filesystems[2486]: Checking size of /dev/nvme0n1p9 Jan 23 18:51:32.164066 google_oslogin_nss_cache[2490]: oslogin_cache_refresh[2490]: Refreshing passwd entry cache Jan 23 18:51:32.162673 oslogin_cache_refresh[2490]: Refreshing passwd entry cache Jan 23 18:51:32.174299 jq[2517]: true Jan 23 18:51:32.187004 google_oslogin_nss_cache[2490]: oslogin_cache_refresh[2490]: Failure getting users, quitting Jan 23 18:51:32.187059 oslogin_cache_refresh[2490]: Failure getting users, quitting Jan 23 18:51:32.187144 google_oslogin_nss_cache[2490]: oslogin_cache_refresh[2490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 23 18:51:32.187171 oslogin_cache_refresh[2490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:51:32.187243 google_oslogin_nss_cache[2490]: oslogin_cache_refresh[2490]: Refreshing group entry cache Jan 23 18:51:32.187270 oslogin_cache_refresh[2490]: Refreshing group entry cache Jan 23 18:51:32.199620 chronyd[2480]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 18:51:32.201897 google_oslogin_nss_cache[2490]: oslogin_cache_refresh[2490]: Failure getting groups, quitting Jan 23 18:51:32.201897 google_oslogin_nss_cache[2490]: oslogin_cache_refresh[2490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:51:32.200436 oslogin_cache_refresh[2490]: Failure getting groups, quitting Jan 23 18:51:32.200445 oslogin_cache_refresh[2490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:51:32.203789 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:51:32.204043 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:51:32.214880 chronyd[2480]: Timezone right/UTC failed leap second check, ignoring Jan 23 18:51:32.215153 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 18:51:32.215026 chronyd[2480]: Loaded seccomp filter (level 2) Jan 23 18:51:32.231810 extend-filesystems[2486]: Resized partition /dev/nvme0n1p9 Jan 23 18:51:32.248316 update_engine[2502]: I20260123 18:51:32.247946 2502 main.cc:92] Flatcar Update Engine starting Jan 23 18:51:32.251699 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:51:32.251931 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:51:32.260196 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 23 18:51:32.265422 tar[2509]: linux-amd64/LICENSE Jan 23 18:51:32.266524 tar[2509]: linux-amd64/helm Jan 23 18:51:32.282847 extend-filesystems[2562]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:51:32.315412 systemd-logind[2501]: New seat seat0. Jan 23 18:51:32.319362 systemd-logind[2501]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 23 18:51:32.319601 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:51:32.324555 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 23 18:51:32.326496 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 23 18:51:32.359180 extend-filesystems[2562]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 18:51:32.359180 extend-filesystems[2562]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 23 18:51:32.359180 extend-filesystems[2562]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 23 18:51:32.377435 extend-filesystems[2486]: Resized filesystem in /dev/nvme0n1p9 Jan 23 18:51:32.385503 bash[2549]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:51:32.364913 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:51:32.365230 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:51:32.376923 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:51:32.381097 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 18:51:32.448900 dbus-daemon[2483]: [system] SELinux support is enabled Jan 23 18:51:32.449106 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:51:32.455601 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 23 18:51:32.455749 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:51:32.458517 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:51:32.458544 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:51:32.472110 dbus-daemon[2483]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 18:51:32.475448 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:51:32.477582 update_engine[2502]: I20260123 18:51:32.476158 2502 update_check_scheduler.cc:74] Next update check in 3m2s Jan 23 18:51:32.502646 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:51:32.607995 coreos-metadata[2482]: Jan 23 18:51:32.607 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 18:51:32.618738 coreos-metadata[2482]: Jan 23 18:51:32.618 INFO Fetch successful Jan 23 18:51:32.618806 coreos-metadata[2482]: Jan 23 18:51:32.618 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 18:51:32.622331 coreos-metadata[2482]: Jan 23 18:51:32.622 INFO Fetch successful Jan 23 18:51:32.622399 coreos-metadata[2482]: Jan 23 18:51:32.622 INFO Fetching http://168.63.129.16/machine/4a934a20-8062-4b5c-a0f9-a3f6763f6c0c/25c4856f%2De439%2D4429%2D87e0%2Ddc46c339c57d.%5Fci%2D4547.1.0%2Da%2D90f1f3b2aa?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 18:51:32.630165 coreos-metadata[2482]: Jan 23 18:51:32.629 INFO Fetch successful Jan 23 18:51:32.630165 coreos-metadata[2482]: Jan 23 18:51:32.629 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 18:51:32.643029 coreos-metadata[2482]: Jan 23 18:51:32.641 INFO Fetch successful Jan 23 18:51:32.742748 systemd[1]: Finished coreos-metadata.service - 
Flatcar Metadata Agent. Jan 23 18:51:32.753404 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:51:32.801219 sshd_keygen[2531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:51:32.861222 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:51:32.868197 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:51:32.872024 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 18:51:32.882471 locksmithd[2594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:51:32.907267 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:51:32.907918 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:51:32.913794 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:51:32.931913 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 18:51:32.937551 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:51:32.947851 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:51:32.954802 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:51:32.959184 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:51:32.997428 tar[2509]: linux-amd64/README.md Jan 23 18:51:33.016155 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 23 18:51:33.442164 containerd[2555]: time="2026-01-23T18:51:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:51:33.444130 containerd[2555]: time="2026-01-23T18:51:33.443766247Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 18:51:33.456381 containerd[2555]: time="2026-01-23T18:51:33.456334176Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.62µs" Jan 23 18:51:33.456497 containerd[2555]: time="2026-01-23T18:51:33.456469289Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:51:33.456576 containerd[2555]: time="2026-01-23T18:51:33.456566139Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:51:33.456618 containerd[2555]: time="2026-01-23T18:51:33.456610247Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:51:33.456762 containerd[2555]: time="2026-01-23T18:51:33.456753670Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:51:33.456800 containerd[2555]: time="2026-01-23T18:51:33.456792766Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:51:33.456892 containerd[2555]: time="2026-01-23T18:51:33.456880104Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:51:33.456986 containerd[2555]: time="2026-01-23T18:51:33.456976148Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457266 
containerd[2555]: time="2026-01-23T18:51:33.457249103Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457319 containerd[2555]: time="2026-01-23T18:51:33.457310669Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457375 containerd[2555]: time="2026-01-23T18:51:33.457366166Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457409 containerd[2555]: time="2026-01-23T18:51:33.457401937Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457603 containerd[2555]: time="2026-01-23T18:51:33.457590432Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457902 containerd[2555]: time="2026-01-23T18:51:33.457636924Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457902 containerd[2555]: time="2026-01-23T18:51:33.457700379Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457902 containerd[2555]: time="2026-01-23T18:51:33.457848400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457902 containerd[2555]: time="2026-01-23T18:51:33.457869650Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 23 18:51:33.457902 containerd[2555]: time="2026-01-23T18:51:33.457878919Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:51:33.458040 containerd[2555]: time="2026-01-23T18:51:33.458030225Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:51:33.458328 containerd[2555]: time="2026-01-23T18:51:33.458319433Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:51:33.458441 containerd[2555]: time="2026-01-23T18:51:33.458409100Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:51:33.471722 containerd[2555]: time="2026-01-23T18:51:33.471548468Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:51:33.471722 containerd[2555]: time="2026-01-23T18:51:33.471599461Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:51:33.472582 containerd[2555]: time="2026-01-23T18:51:33.472512095Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:51:33.472582 containerd[2555]: time="2026-01-23T18:51:33.472540127Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:51:33.473041 containerd[2555]: time="2026-01-23T18:51:33.472556601Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:51:33.473123 containerd[2555]: time="2026-01-23T18:51:33.473108443Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:51:33.473173 containerd[2555]: time="2026-01-23T18:51:33.473164059Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:51:33.473241 containerd[2555]: time="2026-01-23T18:51:33.473203079Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:51:33.473326 containerd[2555]: time="2026-01-23T18:51:33.473221879Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:51:33.473379 containerd[2555]: time="2026-01-23T18:51:33.473362956Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.473507545Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.473902100Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.473916075Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.473928260Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474035175Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474052071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474066770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474083718Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474095938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474106225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474117633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474128742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474140012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474150247Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:51:33.474957 containerd[2555]: time="2026-01-23T18:51:33.474159473Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:51:33.475329 containerd[2555]: time="2026-01-23T18:51:33.474179518Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:51:33.475329 containerd[2555]: time="2026-01-23T18:51:33.474223921Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:51:33.475329 containerd[2555]: time="2026-01-23T18:51:33.474236615Z" level=info msg="Start snapshots syncer" Jan 23 18:51:33.475329 containerd[2555]: time="2026-01-23T18:51:33.474252501Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:51:33.475415 containerd[2555]: 
time="2026-01-23T18:51:33.474516068Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474570605Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474606905Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474681445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474696813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474710583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474721102Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474733355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474743310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474753579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474764295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474777944Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474798967Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp 
type=io.containerd.tracing.processor.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474810188Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474818921Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474827769Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474835711Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474845215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474854639Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474868663Z" level=info msg="runtime interface created" Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474874111Z" level=info msg="created NRI interface" Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474881495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474891294Z" level=info msg="Connect containerd service" Jan 23 18:51:33.475415 containerd[2555]: time="2026-01-23T18:51:33.474911356Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:51:33.478303 containerd[2555]: time="2026-01-23T18:51:33.477873855Z" 
level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:51:33.517041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:51:33.527063 (kubelet)[2648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:51:34.012066 kubelet[2648]: E0123 18:51:34.012013 2648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:51:34.013994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:51:34.014132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:51:34.014510 systemd[1]: kubelet.service: Consumed 921ms CPU time, 264.2M memory peak. Jan 23 18:51:34.039803 containerd[2555]: time="2026-01-23T18:51:34.039755251Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:51:34.039874 containerd[2555]: time="2026-01-23T18:51:34.039808722Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 18:51:34.039874 containerd[2555]: time="2026-01-23T18:51:34.039822057Z" level=info msg="Start subscribing containerd event" Jan 23 18:51:34.039874 containerd[2555]: time="2026-01-23T18:51:34.039848490Z" level=info msg="Start recovering state" Jan 23 18:51:34.039940 containerd[2555]: time="2026-01-23T18:51:34.039928536Z" level=info msg="Start event monitor" Jan 23 18:51:34.039966 containerd[2555]: time="2026-01-23T18:51:34.039941196Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:51:34.039966 containerd[2555]: time="2026-01-23T18:51:34.039955924Z" level=info msg="Start streaming server" Jan 23 18:51:34.040005 containerd[2555]: time="2026-01-23T18:51:34.039964879Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:51:34.040005 containerd[2555]: time="2026-01-23T18:51:34.039972811Z" level=info msg="runtime interface starting up..." Jan 23 18:51:34.040005 containerd[2555]: time="2026-01-23T18:51:34.039979410Z" level=info msg="starting plugins..." Jan 23 18:51:34.040005 containerd[2555]: time="2026-01-23T18:51:34.039991218Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:51:34.040105 containerd[2555]: time="2026-01-23T18:51:34.040092964Z" level=info msg="containerd successfully booted in 0.598420s" Jan 23 18:51:34.040264 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:51:34.043255 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:51:34.047627 systemd[1]: Startup finished in 4.529s (kernel) + 10.844s (initrd) + 14.201s (userspace) = 29.574s. Jan 23 18:51:34.344283 login[2634]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:34.358977 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:51:34.359866 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 23 18:51:34.365305 systemd-logind[2501]: New session 1 of user core. Jan 23 18:51:34.414040 login[2635]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:34.418255 systemd-logind[2501]: New session 2 of user core. Jan 23 18:51:34.433183 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:51:34.435284 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:51:34.447562 (systemd)[2673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:34.449407 systemd-logind[2501]: New session 3 of user core. Jan 23 18:51:34.584742 systemd[2673]: Queued start job for default target default.target. Jan 23 18:51:34.595446 systemd[2673]: Created slice app.slice - User Application Slice. Jan 23 18:51:34.595496 systemd[2673]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 18:51:34.595512 systemd[2673]: Reached target paths.target - Paths. Jan 23 18:51:34.595548 systemd[2673]: Reached target timers.target - Timers. Jan 23 18:51:34.596418 systemd[2673]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:51:34.599630 systemd[2673]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 18:51:34.608458 systemd[2673]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:51:34.608568 systemd[2673]: Reached target sockets.target - Sockets. Jan 23 18:51:34.611375 systemd[2673]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 18:51:34.611444 systemd[2673]: Reached target basic.target - Basic System. Jan 23 18:51:34.611814 systemd[2673]: Reached target default.target - Main User Target. Jan 23 18:51:34.611848 systemd[2673]: Startup finished in 158ms. Jan 23 18:51:34.612091 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 23 18:51:34.616895 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:51:34.617632 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 18:51:34.727775 waagent[2632]: 2026-01-23T18:51:34.727708Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 23 18:51:34.729439 waagent[2632]: 2026-01-23T18:51:34.729356Z INFO Daemon Daemon OS: flatcar 4547.1.0 Jan 23 18:51:34.730689 waagent[2632]: 2026-01-23T18:51:34.730581Z INFO Daemon Daemon Python: 3.11.13 Jan 23 18:51:34.732000 waagent[2632]: 2026-01-23T18:51:34.731871Z INFO Daemon Daemon Run daemon Jan 23 18:51:34.733228 waagent[2632]: 2026-01-23T18:51:34.733187Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.1.0' Jan 23 18:51:34.735588 waagent[2632]: 2026-01-23T18:51:34.735011Z INFO Daemon Daemon Using waagent for provisioning Jan 23 18:51:34.736916 waagent[2632]: 2026-01-23T18:51:34.736885Z INFO Daemon Daemon Activate resource disk Jan 23 18:51:34.738106 waagent[2632]: 2026-01-23T18:51:34.738045Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 18:51:34.741264 waagent[2632]: 2026-01-23T18:51:34.741223Z INFO Daemon Daemon Found device: None Jan 23 18:51:34.742684 waagent[2632]: 2026-01-23T18:51:34.742553Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 18:51:34.743296 waagent[2632]: 2026-01-23T18:51:34.743263Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 18:51:34.747409 waagent[2632]: 2026-01-23T18:51:34.747362Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 18:51:34.749018 waagent[2632]: 2026-01-23T18:51:34.748984Z INFO Daemon Daemon Running default provisioning handler Jan 23 18:51:34.755974 waagent[2632]: 2026-01-23T18:51:34.755183Z INFO Daemon Daemon Unable to get cloud-init 
enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 18:51:34.756465 waagent[2632]: 2026-01-23T18:51:34.756430Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 18:51:34.756775 waagent[2632]: 2026-01-23T18:51:34.756753Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 18:51:34.757065 waagent[2632]: 2026-01-23T18:51:34.757047Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 18:51:34.850755 waagent[2632]: 2026-01-23T18:51:34.850674Z INFO Daemon Daemon Successfully mounted dvd Jan 23 18:51:34.873761 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 23 18:51:34.875756 waagent[2632]: 2026-01-23T18:51:34.875716Z INFO Daemon Daemon Detect protocol endpoint Jan 23 18:51:34.876981 waagent[2632]: 2026-01-23T18:51:34.876272Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 18:51:34.878619 waagent[2632]: 2026-01-23T18:51:34.878590Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 23 18:51:34.880365 waagent[2632]: 2026-01-23T18:51:34.879370Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 18:51:34.881915 waagent[2632]: 2026-01-23T18:51:34.881879Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 18:51:34.883195 waagent[2632]: 2026-01-23T18:51:34.883165Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 18:51:34.897758 waagent[2632]: 2026-01-23T18:51:34.897722Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 18:51:34.899500 waagent[2632]: 2026-01-23T18:51:34.898381Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 18:51:34.899500 waagent[2632]: 2026-01-23T18:51:34.898625Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 18:51:35.035380 waagent[2632]: 2026-01-23T18:51:35.035290Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 18:51:35.037036 waagent[2632]: 2026-01-23T18:51:35.036067Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 18:51:35.050564 waagent[2632]: 2026-01-23T18:51:35.050528Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 18:51:35.064017 waagent[2632]: 2026-01-23T18:51:35.063984Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 18:51:35.065241 waagent[2632]: 2026-01-23T18:51:35.064847Z INFO Daemon Jan 23 18:51:35.065241 waagent[2632]: 2026-01-23T18:51:35.064942Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: c1f220ef-33e9-492d-9749-60f28db209bc eTag: 10195775957640620489 source: Fabric] Jan 23 18:51:35.065241 waagent[2632]: 2026-01-23T18:51:35.065464Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 23 18:51:35.065241 waagent[2632]: 2026-01-23T18:51:35.065736Z INFO Daemon Jan 23 18:51:35.065241 waagent[2632]: 2026-01-23T18:51:35.065949Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 18:51:35.073505 waagent[2632]: 2026-01-23T18:51:35.072197Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 18:51:35.174080 waagent[2632]: 2026-01-23T18:51:35.173991Z INFO Daemon Downloaded certificate {'thumbprint': 'DB4942BE721893C2DEDD11D7F902A5E0B5D8FB39', 'hasPrivateKey': True} Jan 23 18:51:35.176817 waagent[2632]: 2026-01-23T18:51:35.176781Z INFO Daemon Fetch goal state completed Jan 23 18:51:35.183957 waagent[2632]: 2026-01-23T18:51:35.183924Z INFO Daemon Daemon Starting provisioning Jan 23 18:51:35.185161 waagent[2632]: 2026-01-23T18:51:35.184499Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 18:51:35.186059 waagent[2632]: 2026-01-23T18:51:35.185915Z INFO Daemon Daemon Set hostname [ci-4547.1.0-a-90f1f3b2aa] Jan 23 18:51:35.202946 waagent[2632]: 2026-01-23T18:51:35.202899Z INFO Daemon Daemon Publish hostname [ci-4547.1.0-a-90f1f3b2aa] Jan 23 18:51:35.203813 waagent[2632]: 2026-01-23T18:51:35.203600Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 18:51:35.203813 waagent[2632]: 2026-01-23T18:51:35.203946Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 18:51:35.211548 systemd-networkd[2150]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:51:35.211556 systemd-networkd[2150]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 18:51:35.211619 systemd-networkd[2150]: eth0: DHCP lease lost Jan 23 18:51:35.229332 waagent[2632]: 2026-01-23T18:51:35.228415Z INFO Daemon Daemon Create user account if not exists Jan 23 18:51:35.229332 waagent[2632]: 2026-01-23T18:51:35.228925Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 18:51:35.229332 waagent[2632]: 2026-01-23T18:51:35.229011Z INFO Daemon Daemon Configure sudoer Jan 23 18:51:35.233513 systemd-networkd[2150]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 18:51:35.235807 waagent[2632]: 2026-01-23T18:51:35.235765Z INFO Daemon Daemon Configure sshd Jan 23 18:51:35.240755 waagent[2632]: 2026-01-23T18:51:35.240716Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 18:51:35.243154 waagent[2632]: 2026-01-23T18:51:35.242107Z INFO Daemon Daemon Deploy ssh public key. Jan 23 18:51:36.376198 waagent[2632]: 2026-01-23T18:51:36.376143Z INFO Daemon Daemon Provisioning complete Jan 23 18:51:36.386034 waagent[2632]: 2026-01-23T18:51:36.385999Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 18:51:36.386760 waagent[2632]: 2026-01-23T18:51:36.386550Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 23 18:51:36.386760 waagent[2632]: 2026-01-23T18:51:36.386799Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 18:51:36.487780 waagent[2726]: 2026-01-23T18:51:36.487718Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 18:51:36.488041 waagent[2726]: 2026-01-23T18:51:36.487816Z INFO ExtHandler ExtHandler OS: flatcar 4547.1.0 Jan 23 18:51:36.488041 waagent[2726]: 2026-01-23T18:51:36.487857Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 18:51:36.488041 waagent[2726]: 2026-01-23T18:51:36.487896Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 23 18:51:36.518733 waagent[2726]: 2026-01-23T18:51:36.518686Z INFO ExtHandler ExtHandler Distro: flatcar-4547.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 18:51:36.518863 waagent[2726]: 2026-01-23T18:51:36.518837Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 18:51:36.518926 waagent[2726]: 2026-01-23T18:51:36.518889Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 18:51:36.525362 waagent[2726]: 2026-01-23T18:51:36.525307Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 18:51:36.538176 waagent[2726]: 2026-01-23T18:51:36.538145Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 18:51:36.538512 waagent[2726]: 2026-01-23T18:51:36.538468Z INFO ExtHandler Jan 23 18:51:36.538569 waagent[2726]: 2026-01-23T18:51:36.538543Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9e176fee-732a-4468-94d3-d20908821f7e eTag: 10195775957640620489 source: Fabric] Jan 23 18:51:36.538772 waagent[2726]: 2026-01-23T18:51:36.538740Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 18:51:36.539088 waagent[2726]: 2026-01-23T18:51:36.539060Z INFO ExtHandler Jan 23 18:51:36.539127 waagent[2726]: 2026-01-23T18:51:36.539100Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 18:51:36.542673 waagent[2726]: 2026-01-23T18:51:36.542646Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 18:51:36.612069 waagent[2726]: 2026-01-23T18:51:36.612019Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DB4942BE721893C2DEDD11D7F902A5E0B5D8FB39', 'hasPrivateKey': True} Jan 23 18:51:36.612389 waagent[2726]: 2026-01-23T18:51:36.612360Z INFO ExtHandler Fetch goal state completed Jan 23 18:51:36.622976 waagent[2726]: 2026-01-23T18:51:36.622931Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.5-dev (Library: OpenSSL 3.5.5-dev ) Jan 23 18:51:36.627105 waagent[2726]: 2026-01-23T18:51:36.627026Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2726 Jan 23 18:51:36.627167 waagent[2726]: 2026-01-23T18:51:36.627144Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 18:51:36.627406 waagent[2726]: 2026-01-23T18:51:36.627381Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 18:51:36.628411 waagent[2726]: 2026-01-23T18:51:36.628375Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 18:51:36.628765 waagent[2726]: 2026-01-23T18:51:36.628739Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 18:51:36.628867 waagent[2726]: 2026-01-23T18:51:36.628845Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 23 18:51:36.629237 waagent[2726]: 2026-01-23T18:51:36.629215Z INFO ExtHandler ExtHandler Starting setup for 
Persistent firewall rules Jan 23 18:51:36.655872 waagent[2726]: 2026-01-23T18:51:36.655848Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 18:51:36.656002 waagent[2726]: 2026-01-23T18:51:36.655981Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 18:51:36.661290 waagent[2726]: 2026-01-23T18:51:36.661122Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 18:51:36.666189 systemd[1]: Reload requested from client PID 2741 ('systemctl') (unit waagent.service)... Jan 23 18:51:36.666204 systemd[1]: Reloading... Jan 23 18:51:36.748533 zram_generator::config[2783]: No configuration found. Jan 23 18:51:36.905498 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#183 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jan 23 18:51:36.926376 systemd[1]: Reloading finished in 259 ms. Jan 23 18:51:36.945415 waagent[2726]: 2026-01-23T18:51:36.944688Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 18:51:36.945415 waagent[2726]: 2026-01-23T18:51:36.944827Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 18:51:37.507241 waagent[2726]: 2026-01-23T18:51:37.507171Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 23 18:51:37.507578 waagent[2726]: 2026-01-23T18:51:37.507540Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. 
python supported: [True] Jan 23 18:51:37.508285 waagent[2726]: 2026-01-23T18:51:37.508248Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 18:51:37.508441 waagent[2726]: 2026-01-23T18:51:37.508285Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 18:51:37.508441 waagent[2726]: 2026-01-23T18:51:37.508414Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 18:51:37.508627 waagent[2726]: 2026-01-23T18:51:37.508604Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 18:51:37.508968 waagent[2726]: 2026-01-23T18:51:37.508899Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 18:51:37.509041 waagent[2726]: 2026-01-23T18:51:37.509012Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 18:51:37.509231 waagent[2726]: 2026-01-23T18:51:37.509203Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 18:51:37.509350 waagent[2726]: 2026-01-23T18:51:37.509322Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 23 18:51:37.509622 waagent[2726]: 2026-01-23T18:51:37.509471Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 18:51:37.509622 waagent[2726]: 2026-01-23T18:51:37.509588Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 18:51:37.509622 waagent[2726]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 18:51:37.509622 waagent[2726]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 18:51:37.509622 waagent[2726]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 18:51:37.509622 waagent[2726]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 18:51:37.509622 waagent[2726]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 18:51:37.509622 waagent[2726]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 18:51:37.509836 waagent[2726]: 2026-01-23T18:51:37.509795Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 18:51:37.509920 waagent[2726]: 2026-01-23T18:51:37.509898Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 18:51:37.510283 waagent[2726]: 2026-01-23T18:51:37.510261Z INFO EnvHandler ExtHandler Configure routes Jan 23 18:51:37.510595 waagent[2726]: 2026-01-23T18:51:37.510566Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 23 18:51:37.510665 waagent[2726]: 2026-01-23T18:51:37.510643Z INFO EnvHandler ExtHandler Gateway:None Jan 23 18:51:37.510709 waagent[2726]: 2026-01-23T18:51:37.510692Z INFO EnvHandler ExtHandler Routes:None Jan 23 18:51:37.533836 waagent[2726]: 2026-01-23T18:51:37.533796Z INFO ExtHandler ExtHandler Jan 23 18:51:37.533905 waagent[2726]: 2026-01-23T18:51:37.533855Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1fb21fe4-127d-461f-9001-b35f287391d0 correlation 53a62e73-5972-4485-a199-608bcd8f9bfd created: 2026-01-23T18:50:45.006004Z] Jan 23 18:51:37.534152 waagent[2726]: 2026-01-23T18:51:37.534125Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 18:51:37.534567 waagent[2726]: 2026-01-23T18:51:37.534542Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 23 18:51:37.562339 waagent[2726]: 2026-01-23T18:51:37.562297Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 18:51:37.562339 waagent[2726]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 23 18:51:37.562703 waagent[2726]: 2026-01-23T18:51:37.562666Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0C3AB260-4A23-4AF8-B4FC-B2F31FAD1794;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 23 18:51:37.591621 waagent[2726]: 2026-01-23T18:51:37.591572Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 18:51:37.591621 waagent[2726]: Executing ['ip', '-a', '-o', 'link']: Jan 23 18:51:37.591621 waagent[2726]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 18:51:37.591621 waagent[2726]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:03:aa brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7c1e523403aa Jan 23 18:51:37.591621 waagent[2726]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:03:aa brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 23 18:51:37.591621 waagent[2726]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 18:51:37.591621 waagent[2726]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 18:51:37.591621 waagent[2726]: 2: eth0 inet 10.200.8.14/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 18:51:37.591621 waagent[2726]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 18:51:37.591621 waagent[2726]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 18:51:37.591621 waagent[2726]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:3aa/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 18:51:37.619988 waagent[2726]: 2026-01-23T18:51:37.619941Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 23 18:51:37.619988 waagent[2726]: Chain 
INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:51:37.619988 waagent[2726]: pkts bytes target prot opt in out source destination Jan 23 18:51:37.619988 waagent[2726]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:51:37.619988 waagent[2726]: pkts bytes target prot opt in out source destination Jan 23 18:51:37.619988 waagent[2726]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:51:37.619988 waagent[2726]: pkts bytes target prot opt in out source destination Jan 23 18:51:37.619988 waagent[2726]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 18:51:37.619988 waagent[2726]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 18:51:37.619988 waagent[2726]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 18:51:37.622672 waagent[2726]: 2026-01-23T18:51:37.622624Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 18:51:37.622672 waagent[2726]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:51:37.622672 waagent[2726]: pkts bytes target prot opt in out source destination Jan 23 18:51:37.622672 waagent[2726]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:51:37.622672 waagent[2726]: pkts bytes target prot opt in out source destination Jan 23 18:51:37.622672 waagent[2726]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:51:37.622672 waagent[2726]: pkts bytes target prot opt in out source destination Jan 23 18:51:37.622672 waagent[2726]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 18:51:37.622672 waagent[2726]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 18:51:37.622672 waagent[2726]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 18:51:44.264973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:51:44.266439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 18:51:44.699461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:51:44.702598 (kubelet)[2881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:51:44.737822 kubelet[2881]: E0123 18:51:44.737788 2881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:51:44.740669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:51:44.740798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:51:44.741130 systemd[1]: kubelet.service: Consumed 131ms CPU time, 107.7M memory peak. Jan 23 18:51:54.757126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:51:54.758590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:51:55.219633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:51:55.225663 (kubelet)[2896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:51:55.262752 kubelet[2896]: E0123 18:51:55.262728 2896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:51:55.264382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:51:55.264498 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 18:51:55.264944 systemd[1]: kubelet.service: Consumed 128ms CPU time, 108.7M memory peak. Jan 23 18:51:56.004425 chronyd[2480]: Selected source PHC0 Jan 23 18:52:01.339217 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:52:01.340314 systemd[1]: Started sshd@0-10.200.8.14:22-10.200.16.10:60902.service - OpenSSH per-connection server daemon (10.200.16.10:60902). Jan 23 18:52:02.038275 sshd[2904]: Accepted publickey for core from 10.200.16.10 port 60902 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:52:02.039457 sshd-session[2904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:02.044204 systemd-logind[2501]: New session 4 of user core. Jan 23 18:52:02.050645 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:52:02.465115 systemd[1]: Started sshd@1-10.200.8.14:22-10.200.16.10:60912.service - OpenSSH per-connection server daemon (10.200.16.10:60912). Jan 23 18:52:03.017536 sshd[2911]: Accepted publickey for core from 10.200.16.10 port 60912 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:52:03.018734 sshd-session[2911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:03.023143 systemd-logind[2501]: New session 5 of user core. Jan 23 18:52:03.028650 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:52:03.330494 sshd[2915]: Connection closed by 10.200.16.10 port 60912 Jan 23 18:52:03.331665 sshd-session[2911]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:03.334942 systemd-logind[2501]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:52:03.335397 systemd[1]: sshd@1-10.200.8.14:22-10.200.16.10:60912.service: Deactivated successfully. Jan 23 18:52:03.336901 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:52:03.338270 systemd-logind[2501]: Removed session 5. 
Jan 23 18:52:03.444924 systemd[1]: Started sshd@2-10.200.8.14:22-10.200.16.10:60924.service - OpenSSH per-connection server daemon (10.200.16.10:60924). Jan 23 18:52:04.008421 sshd[2921]: Accepted publickey for core from 10.200.16.10 port 60924 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:52:04.008925 sshd-session[2921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:04.012587 systemd-logind[2501]: New session 6 of user core. Jan 23 18:52:04.017637 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:52:04.318340 sshd[2925]: Connection closed by 10.200.16.10 port 60924 Jan 23 18:52:04.320026 sshd-session[2921]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:04.323017 systemd[1]: sshd@2-10.200.8.14:22-10.200.16.10:60924.service: Deactivated successfully. Jan 23 18:52:04.324530 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:52:04.325186 systemd-logind[2501]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:52:04.326298 systemd-logind[2501]: Removed session 6. Jan 23 18:52:04.559951 systemd[1]: Started sshd@3-10.200.8.14:22-10.200.16.10:60928.service - OpenSSH per-connection server daemon (10.200.16.10:60928). Jan 23 18:52:05.122519 sshd[2931]: Accepted publickey for core from 10.200.16.10 port 60928 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:52:05.123335 sshd-session[2931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:05.127601 systemd-logind[2501]: New session 7 of user core. Jan 23 18:52:05.132641 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:52:05.334134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:52:05.335647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 18:52:05.436882 sshd[2935]: Connection closed by 10.200.16.10 port 60928 Jan 23 18:52:05.437240 sshd-session[2931]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:05.440655 systemd-logind[2501]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:52:05.440876 systemd[1]: sshd@3-10.200.8.14:22-10.200.16.10:60928.service: Deactivated successfully. Jan 23 18:52:05.442251 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:52:05.443686 systemd-logind[2501]: Removed session 7. Jan 23 18:52:05.549871 systemd[1]: Started sshd@4-10.200.8.14:22-10.200.16.10:60942.service - OpenSSH per-connection server daemon (10.200.16.10:60942). Jan 23 18:52:05.799464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:52:05.803764 (kubelet)[2952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:52:05.834130 kubelet[2952]: E0123 18:52:05.834079 2952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:52:05.835539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:52:05.835656 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:52:05.835962 systemd[1]: kubelet.service: Consumed 122ms CPU time, 108M memory peak. Jan 23 18:52:06.106581 sshd[2944]: Accepted publickey for core from 10.200.16.10 port 60942 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:52:06.107012 sshd-session[2944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:06.111501 systemd-logind[2501]: New session 8 of user core. 
Jan 23 18:52:06.113647 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 18:52:06.500674 sudo[2961]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:52:06.500925 sudo[2961]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:52:06.529227 sudo[2961]: pam_unix(sudo:session): session closed for user root Jan 23 18:52:06.632406 sshd[2960]: Connection closed by 10.200.16.10 port 60942 Jan 23 18:52:06.634025 sshd-session[2944]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:06.637277 systemd[1]: sshd@4-10.200.8.14:22-10.200.16.10:60942.service: Deactivated successfully. Jan 23 18:52:06.639124 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:52:06.639946 systemd-logind[2501]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:52:06.641224 systemd-logind[2501]: Removed session 8. Jan 23 18:52:06.766241 systemd[1]: Started sshd@5-10.200.8.14:22-10.200.16.10:60952.service - OpenSSH per-connection server daemon (10.200.16.10:60952). Jan 23 18:52:07.325412 sshd[2968]: Accepted publickey for core from 10.200.16.10 port 60952 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:52:07.326603 sshd-session[2968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:07.330939 systemd-logind[2501]: New session 9 of user core. Jan 23 18:52:07.339651 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 18:52:07.537248 sudo[2974]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:52:07.537536 sudo[2974]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:52:07.541714 sudo[2974]: pam_unix(sudo:session): session closed for user root Jan 23 18:52:07.546293 sudo[2973]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:52:07.546587 sudo[2973]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:52:07.552593 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:52:07.583581 kernel: kauditd_printk_skb: 162 callbacks suppressed Jan 23 18:52:07.583810 kernel: audit: type=1305 audit(1769194327.580:258): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:52:07.580000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:52:07.584262 augenrules[2998]: No rules Jan 23 18:52:07.585214 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:52:07.585431 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 23 18:52:07.580000 audit[2998]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe478ae100 a2=420 a3=0 items=0 ppid=2979 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:07.586826 sudo[2973]: pam_unix(sudo:session): session closed for user root Jan 23 18:52:07.593330 kernel: audit: type=1300 audit(1769194327.580:258): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe478ae100 a2=420 a3=0 items=0 ppid=2979 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:07.593372 kernel: audit: type=1327 audit(1769194327.580:258): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:52:07.580000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:52:07.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:07.596280 kernel: audit: type=1130 audit(1769194327.584:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:07.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:52:07.599134 kernel: audit: type=1131 audit(1769194327.584:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:07.585000 audit[2973]: USER_END pid=2973 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:07.602435 kernel: audit: type=1106 audit(1769194327.585:261): pid=2973 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:07.585000 audit[2973]: CRED_DISP pid=2973 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:07.605299 kernel: audit: type=1104 audit(1769194327.585:262): pid=2973 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:07.691649 sshd[2972]: Connection closed by 10.200.16.10 port 60952 Jan 23 18:52:07.692629 sshd-session[2968]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:07.692000 audit[2968]: USER_END pid=2968 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:07.695322 systemd[1]: sshd@5-10.200.8.14:22-10.200.16.10:60952.service: Deactivated successfully. 
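The PROCTITLE records in this run carry the audited command line hex-encoded, with NUL bytes separating argv elements. A quick offline decoder (a sketch only; `ausearch -i` performs the same interpretation with full field resolution):

```python
def decode_proctitle(hexstr):
    """Audit PROCTITLE values are the process argv, hex-encoded with NUL separators."""
    return bytes.fromhex(hexstr).decode('utf-8', errors='replace').split('\x00')

# PROCTITLE value from the auditctl event above (serial :258)
argv = decode_proctitle(
    '2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573'
)
print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```

This confirms the CONFIG_CHANGE/SYSCALL pair above came from `auditctl -R /etc/audit/audit.rules` reloading the rule set.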
Jan 23 18:52:07.697665 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:52:07.699377 systemd-logind[2501]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:52:07.702519 kernel: audit: type=1106 audit(1769194327.692:263): pid=2968 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:07.702548 kernel: audit: type=1104 audit(1769194327.692:264): pid=2968 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:07.692000 audit[2968]: CRED_DISP pid=2968 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:07.701131 systemd-logind[2501]: Removed session 9. Jan 23 18:52:07.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.14:22-10.200.16.10:60952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:07.707805 kernel: audit: type=1131 audit(1769194327.694:265): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.14:22-10.200.16.10:60952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:52:07.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.14:22-10.200.16.10:60958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:07.812119 systemd[1]: Started sshd@6-10.200.8.14:22-10.200.16.10:60958.service - OpenSSH per-connection server daemon (10.200.16.10:60958). Jan 23 18:52:08.375000 audit[3007]: USER_ACCT pid=3007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:08.376972 sshd[3007]: Accepted publickey for core from 10.200.16.10 port 60958 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:52:08.376000 audit[3007]: CRED_ACQ pid=3007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:08.376000 audit[3007]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffec3545a0 a2=3 a3=0 items=0 ppid=1 pid=3007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:08.376000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:52:08.378149 sshd-session[3007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:08.382473 systemd-logind[2501]: New session 10 of user core. Jan 23 18:52:08.388642 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 18:52:08.389000 audit[3007]: USER_START pid=3007 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:08.390000 audit[3011]: CRED_ACQ pid=3011 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:08.591000 audit[3012]: USER_ACCT pid=3012 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:08.593138 sudo[3012]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:52:08.591000 audit[3012]: CRED_REFR pid=3012 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:08.591000 audit[3012]: USER_START pid=3012 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:08.593383 sudo[3012]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:52:10.529558 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 18:52:10.538748 (dockerd)[3031]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:52:11.856269 dockerd[3031]: time="2026-01-23T18:52:11.856014662Z" level=info msg="Starting up" Jan 23 18:52:11.858270 dockerd[3031]: time="2026-01-23T18:52:11.858243202Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:52:11.867290 dockerd[3031]: time="2026-01-23T18:52:11.867260874Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:52:11.943751 dockerd[3031]: time="2026-01-23T18:52:11.943720050Z" level=info msg="Loading containers: start." Jan 23 18:52:11.968502 kernel: Initializing XFRM netlink socket Jan 23 18:52:12.093000 audit[3078]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.093000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb10fe780 a2=0 a3=0 items=0 ppid=3031 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.093000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:52:12.095000 audit[3080]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.095000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff8d0eac20 a2=0 a3=0 items=0 ppid=3031 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:52:12.095000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:52:12.098000 audit[3082]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.098000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc71d7a180 a2=0 a3=0 items=0 ppid=3031 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:52:12.100000 audit[3084]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.100000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7b1abdf0 a2=0 a3=0 items=0 ppid=3031 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.100000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:52:12.101000 audit[3086]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.101000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe6e92e600 a2=0 a3=0 items=0 ppid=3031 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.101000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:52:12.103000 audit[3088]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.103000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd6ad4c590 a2=0 a3=0 items=0 ppid=3031 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.103000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:52:12.105000 audit[3090]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.105000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffef9d35e60 a2=0 a3=0 items=0 ppid=3031 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.105000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:52:12.106000 audit[3092]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.106000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc55d01a20 a2=0 a3=0 items=0 ppid=3031 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.106000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 18:52:12.157000 audit[3095]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.157000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc3e040720 a2=0 a3=0 items=0 ppid=3031 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 18:52:12.158000 audit[3097]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.158000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeb1b1f170 a2=0 a3=0 items=0 ppid=3031 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.158000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:52:12.160000 audit[3099]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.160000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff5fa10ff0 a2=0 a3=0 items=0 ppid=3031 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:52:12.162000 audit[3101]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.162000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc3927d1d0 a2=0 a3=0 items=0 ppid=3031 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:52:12.164000 audit[3103]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.164000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff9e351820 a2=0 a3=0 items=0 ppid=3031 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:52:12.252000 audit[3133]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.252000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff6c54b2e0 a2=0 a3=0 items=0 ppid=3031 pid=3133 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:52:12.254000 audit[3135]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.254000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeb6d51010 a2=0 a3=0 items=0 ppid=3031 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:52:12.255000 audit[3137]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.255000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3018e3d0 a2=0 a3=0 items=0 ppid=3031 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.255000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:52:12.257000 audit[3139]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.257000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdce7edb80 a2=0 a3=0 items=0 ppid=3031 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:52:12.258000 audit[3141]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.258000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc18d8b4e0 a2=0 a3=0 items=0 ppid=3031 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:52:12.260000 audit[3143]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.260000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd36122270 a2=0 a3=0 items=0 ppid=3031 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:52:12.261000 audit[3145]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.261000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd10fe1bd0 a2=0 a3=0 items=0 ppid=3031 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:52:12.263000 audit[3147]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.263000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff877c7730 a2=0 a3=0 items=0 ppid=3031 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.263000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 18:52:12.265000 audit[3149]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.265000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffff71e7a90 a2=0 a3=0 items=0 ppid=3031 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 18:52:12.267000 audit[3151]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 
18:52:12.267000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff5ec46880 a2=0 a3=0 items=0 ppid=3031 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:52:12.268000 audit[3153]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.268000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffde1a17e80 a2=0 a3=0 items=0 ppid=3031 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:52:12.270000 audit[3155]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.270000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffd2d29390 a2=0 a3=0 items=0 ppid=3031 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:52:12.271000 audit[3157]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3157 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.271000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc8045ec80 a2=0 a3=0 items=0 ppid=3031 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:52:12.276000 audit[3162]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.276000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc916378b0 a2=0 a3=0 items=0 ppid=3031 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.276000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:52:12.277000 audit[3164]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.277000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff0f32cf40 a2=0 a3=0 items=0 ppid=3031 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.277000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:52:12.279000 audit[3166]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3166 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.279000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe96054bc0 a2=0 a3=0 items=0 ppid=3031 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.279000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:52:12.280000 audit[3168]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.280000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdfa0c7130 a2=0 a3=0 items=0 ppid=3031 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.280000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:52:12.282000 audit[3170]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.282000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdcc27aa50 a2=0 a3=0 items=0 ppid=3031 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.282000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:52:12.284000 audit[3172]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3172 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:12.284000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc85033020 a2=0 a3=0 items=0 ppid=3031 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.284000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:52:12.320000 audit[3177]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.320000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc6524d0f0 a2=0 a3=0 items=0 ppid=3031 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.320000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 18:52:12.321000 audit[3179]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.321000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff4510d770 a2=0 a3=0 items=0 ppid=3031 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 18:52:12.328000 audit[3187]: 
NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.328000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff14c83c10 a2=0 a3=0 items=0 ppid=3031 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.328000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 18:52:12.332000 audit[3192]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.332000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffddfe3a2a0 a2=0 a3=0 items=0 ppid=3031 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.332000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 18:52:12.334000 audit[3194]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.334000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffb77d9490 a2=0 a3=0 items=0 ppid=3031 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.334000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 18:52:12.336000 audit[3196]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.336000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcd16c2dc0 a2=0 a3=0 items=0 ppid=3031 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.336000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 18:52:12.338000 audit[3198]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.338000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc17f08360 a2=0 a3=0 items=0 ppid=3031 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.338000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:52:12.339000 audit[3200]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:12.339000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd3756df50 
a2=0 a3=0 items=0 ppid=3031 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:12.339000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 18:52:12.341920 systemd-networkd[2150]: docker0: Link UP Jan 23 18:52:12.354591 dockerd[3031]: time="2026-01-23T18:52:12.354559682Z" level=info msg="Loading containers: done." Jan 23 18:52:12.365912 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3738731971-merged.mount: Deactivated successfully. Jan 23 18:52:12.420270 dockerd[3031]: time="2026-01-23T18:52:12.420232919Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:52:12.420384 dockerd[3031]: time="2026-01-23T18:52:12.420303265Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:52:12.420384 dockerd[3031]: time="2026-01-23T18:52:12.420372698Z" level=info msg="Initializing buildkit" Jan 23 18:52:12.460369 dockerd[3031]: time="2026-01-23T18:52:12.460332013Z" level=info msg="Completed buildkit initialization" Jan 23 18:52:12.467422 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 23 18:52:12.467506 dockerd[3031]: time="2026-01-23T18:52:12.467467198Z" level=info msg="Daemon has completed initialization" Jan 23 18:52:12.467562 dockerd[3031]: time="2026-01-23T18:52:12.467524320Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:52:12.468186 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 23 18:52:12.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:13.461561 containerd[2555]: time="2026-01-23T18:52:13.461520718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 18:52:14.465867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963411028.mount: Deactivated successfully. Jan 23 18:52:15.390719 containerd[2555]: time="2026-01-23T18:52:15.390670880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:15.393123 containerd[2555]: time="2026-01-23T18:52:15.393000869Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 23 18:52:15.395566 containerd[2555]: time="2026-01-23T18:52:15.395529657Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:15.400571 containerd[2555]: time="2026-01-23T18:52:15.400547907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:15.401557 containerd[2555]: time="2026-01-23T18:52:15.401020397Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.939071025s" Jan 23 18:52:15.401557 containerd[2555]: time="2026-01-23T18:52:15.401052213Z" 
level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 18:52:15.401766 containerd[2555]: time="2026-01-23T18:52:15.401741194Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 18:52:16.007450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 18:52:16.009318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:52:16.503582 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 23 18:52:16.553662 kernel: audit: type=1130 audit(1769194336.498:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:16.553706 kernel: audit: type=1131 audit(1769194336.539:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:52:16.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:16.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:52:16.498588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 18:52:16.507663 (kubelet)[3304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:52:16.554020 kubelet[3304]: E0123 18:52:16.538944 3304 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:52:16.539888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:52:16.539977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:52:16.540353 systemd[1]: kubelet.service: Consumed 128ms CPU time, 110.2M memory peak. Jan 23 18:52:17.479071 update_engine[2502]: I20260123 18:52:17.479012 2502 update_attempter.cc:509] Updating boot flags... Jan 23 18:52:17.483501 containerd[2555]: time="2026-01-23T18:52:17.483425773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:17.487680 containerd[2555]: time="2026-01-23T18:52:17.487640647Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 23 18:52:17.491049 containerd[2555]: time="2026-01-23T18:52:17.490229182Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:17.494727 containerd[2555]: time="2026-01-23T18:52:17.494680675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:17.495567 containerd[2555]: 
time="2026-01-23T18:52:17.495539427Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.093768781s" Jan 23 18:52:17.495630 containerd[2555]: time="2026-01-23T18:52:17.495581669Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 18:52:17.496762 containerd[2555]: time="2026-01-23T18:52:17.496740636Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 18:52:18.686456 containerd[2555]: time="2026-01-23T18:52:18.686402687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:18.688800 containerd[2555]: time="2026-01-23T18:52:18.688627187Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 23 18:52:18.691102 containerd[2555]: time="2026-01-23T18:52:18.691076316Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:18.694709 containerd[2555]: time="2026-01-23T18:52:18.694681254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:18.695837 containerd[2555]: time="2026-01-23T18:52:18.695417078Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id 
\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.198650519s" Jan 23 18:52:18.695837 containerd[2555]: time="2026-01-23T18:52:18.695446327Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 18:52:18.696035 containerd[2555]: time="2026-01-23T18:52:18.696001853Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 18:52:19.756135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264710011.mount: Deactivated successfully. Jan 23 18:52:20.108932 containerd[2555]: time="2026-01-23T18:52:20.108726862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:20.110852 containerd[2555]: time="2026-01-23T18:52:20.110826521Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 23 18:52:20.113270 containerd[2555]: time="2026-01-23T18:52:20.113248316Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:20.116295 containerd[2555]: time="2026-01-23T18:52:20.116271765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:20.116709 containerd[2555]: time="2026-01-23T18:52:20.116588339Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo 
tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.420551849s" Jan 23 18:52:20.116709 containerd[2555]: time="2026-01-23T18:52:20.116620443Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 18:52:20.117175 containerd[2555]: time="2026-01-23T18:52:20.117155673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 18:52:20.762396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788750653.mount: Deactivated successfully. Jan 23 18:52:21.614123 containerd[2555]: time="2026-01-23T18:52:21.614073067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:21.616284 containerd[2555]: time="2026-01-23T18:52:21.616255868Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17692313" Jan 23 18:52:21.618650 containerd[2555]: time="2026-01-23T18:52:21.618609361Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:21.621951 containerd[2555]: time="2026-01-23T18:52:21.621909501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:21.622767 containerd[2555]: time="2026-01-23T18:52:21.622555138Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.505374956s" Jan 23 18:52:21.622767 containerd[2555]: time="2026-01-23T18:52:21.622583607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 18:52:21.623001 containerd[2555]: time="2026-01-23T18:52:21.622981242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 18:52:22.131329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781950203.mount: Deactivated successfully. Jan 23 18:52:22.146851 containerd[2555]: time="2026-01-23T18:52:22.146812133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:52:22.149300 containerd[2555]: time="2026-01-23T18:52:22.149161930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:52:22.151833 containerd[2555]: time="2026-01-23T18:52:22.151808577Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:52:22.155245 containerd[2555]: time="2026-01-23T18:52:22.155217340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:52:22.155726 containerd[2555]: time="2026-01-23T18:52:22.155706240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 532.69688ms" Jan 23 18:52:22.155812 containerd[2555]: time="2026-01-23T18:52:22.155800499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 18:52:22.156273 containerd[2555]: time="2026-01-23T18:52:22.156248285Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 18:52:22.790578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293444357.mount: Deactivated successfully. Jan 23 18:52:24.461064 containerd[2555]: time="2026-01-23T18:52:24.461016186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:24.463267 containerd[2555]: time="2026-01-23T18:52:24.463115723Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502673" Jan 23 18:52:24.465551 containerd[2555]: time="2026-01-23T18:52:24.465526058Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:24.469144 containerd[2555]: time="2026-01-23T18:52:24.469114994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:24.470455 containerd[2555]: time="2026-01-23T18:52:24.470096991Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 2.313823103s" Jan 23 18:52:24.470455 containerd[2555]: time="2026-01-23T18:52:24.470125640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 18:52:26.600837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 18:52:26.602250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:52:26.612857 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:52:26.612911 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:52:26.613147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:52:26.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:52:26.617518 kernel: audit: type=1130 audit(1769194346.611:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:52:26.619094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:52:26.646532 systemd[1]: Reload requested from client PID 3492 ('systemctl') (unit session-10.scope)... Jan 23 18:52:26.646545 systemd[1]: Reloading... Jan 23 18:52:26.735522 zram_generator::config[3538]: No configuration found. Jan 23 18:52:26.946815 systemd[1]: Reloading finished in 299 ms. Jan 23 18:52:26.976186 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:52:26.976254 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:52:26.976518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 18:52:26.976561 systemd[1]: kubelet.service: Consumed 74ms CPU time, 75.5M memory peak. Jan 23 18:52:26.983506 kernel: audit: type=1130 audit(1769194346.975:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:52:26.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:52:26.981992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:52:26.981000 audit: BPF prog-id=87 op=LOAD Jan 23 18:52:26.981000 audit: BPF prog-id=77 op=UNLOAD Jan 23 18:52:26.987225 kernel: audit: type=1334 audit(1769194346.981:320): prog-id=87 op=LOAD Jan 23 18:52:26.987313 kernel: audit: type=1334 audit(1769194346.981:321): prog-id=77 op=UNLOAD Jan 23 18:52:26.982000 audit: BPF prog-id=88 op=LOAD Jan 23 18:52:26.991318 kernel: audit: type=1334 audit(1769194346.982:322): prog-id=88 op=LOAD Jan 23 18:52:26.991378 kernel: audit: type=1334 audit(1769194346.982:323): prog-id=78 op=UNLOAD Jan 23 18:52:26.982000 audit: BPF prog-id=78 op=UNLOAD Jan 23 18:52:26.992780 kernel: audit: type=1334 audit(1769194346.982:324): prog-id=89 op=LOAD Jan 23 18:52:26.982000 audit: BPF prog-id=89 op=LOAD Jan 23 18:52:26.994091 kernel: audit: type=1334 audit(1769194346.982:325): prog-id=90 op=LOAD Jan 23 18:52:26.982000 audit: BPF prog-id=90 op=LOAD Jan 23 18:52:26.995512 kernel: audit: type=1334 audit(1769194346.982:326): prog-id=79 op=UNLOAD Jan 23 18:52:26.982000 audit: BPF prog-id=79 op=UNLOAD Jan 23 18:52:26.997090 kernel: audit: type=1334 audit(1769194346.982:327): prog-id=80 op=UNLOAD Jan 23 18:52:26.982000 audit: BPF prog-id=80 op=UNLOAD Jan 23 18:52:26.983000 audit: BPF prog-id=91 op=LOAD Jan 23 18:52:26.986000 audit: BPF prog-id=67 op=UNLOAD Jan 23 
18:52:26.986000 audit: BPF prog-id=92 op=LOAD Jan 23 18:52:26.986000 audit: BPF prog-id=93 op=LOAD Jan 23 18:52:26.986000 audit: BPF prog-id=68 op=UNLOAD Jan 23 18:52:26.986000 audit: BPF prog-id=69 op=UNLOAD Jan 23 18:52:26.989000 audit: BPF prog-id=94 op=LOAD Jan 23 18:52:26.989000 audit: BPF prog-id=70 op=UNLOAD Jan 23 18:52:26.989000 audit: BPF prog-id=95 op=LOAD Jan 23 18:52:26.989000 audit: BPF prog-id=96 op=LOAD Jan 23 18:52:26.989000 audit: BPF prog-id=71 op=UNLOAD Jan 23 18:52:26.989000 audit: BPF prog-id=72 op=UNLOAD Jan 23 18:52:26.992000 audit: BPF prog-id=97 op=LOAD Jan 23 18:52:26.992000 audit: BPF prog-id=81 op=UNLOAD Jan 23 18:52:26.992000 audit: BPF prog-id=98 op=LOAD Jan 23 18:52:26.992000 audit: BPF prog-id=99 op=LOAD Jan 23 18:52:26.992000 audit: BPF prog-id=82 op=UNLOAD Jan 23 18:52:26.992000 audit: BPF prog-id=83 op=UNLOAD Jan 23 18:52:26.993000 audit: BPF prog-id=100 op=LOAD Jan 23 18:52:26.993000 audit: BPF prog-id=84 op=UNLOAD Jan 23 18:52:26.993000 audit: BPF prog-id=101 op=LOAD Jan 23 18:52:26.993000 audit: BPF prog-id=102 op=LOAD Jan 23 18:52:26.993000 audit: BPF prog-id=85 op=UNLOAD Jan 23 18:52:26.993000 audit: BPF prog-id=86 op=UNLOAD Jan 23 18:52:26.995000 audit: BPF prog-id=103 op=LOAD Jan 23 18:52:26.995000 audit: BPF prog-id=73 op=UNLOAD Jan 23 18:52:26.995000 audit: BPF prog-id=104 op=LOAD Jan 23 18:52:26.995000 audit: BPF prog-id=74 op=UNLOAD Jan 23 18:52:26.996000 audit: BPF prog-id=105 op=LOAD Jan 23 18:52:26.996000 audit: BPF prog-id=106 op=LOAD Jan 23 18:52:26.996000 audit: BPF prog-id=75 op=UNLOAD Jan 23 18:52:26.996000 audit: BPF prog-id=76 op=UNLOAD Jan 23 18:52:27.486602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:52:27.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:52:27.497796 (kubelet)[3609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:52:27.533086 kubelet[3609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:52:27.533515 kubelet[3609]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:52:27.533515 kubelet[3609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:52:27.533515 kubelet[3609]: I0123 18:52:27.533328 3609 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:52:27.752097 kubelet[3609]: I0123 18:52:27.751998 3609 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 18:52:27.752097 kubelet[3609]: I0123 18:52:27.752022 3609 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:52:27.752640 kubelet[3609]: I0123 18:52:27.752621 3609 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 18:52:27.780511 kubelet[3609]: E0123 18:52:27.780121 3609 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:52:27.781113 kubelet[3609]: I0123 
18:52:27.781013 3609 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:52:27.790294 kubelet[3609]: I0123 18:52:27.790271 3609 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:52:27.792803 kubelet[3609]: I0123 18:52:27.792774 3609 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 18:52:27.792979 kubelet[3609]: I0123 18:52:27.792950 3609 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:52:27.793119 kubelet[3609]: I0123 18:52:27.792979 3609 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-a-90f1f3b2aa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"Topolo
gyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:52:27.793642 kubelet[3609]: I0123 18:52:27.793628 3609 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:52:27.793681 kubelet[3609]: I0123 18:52:27.793643 3609 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 18:52:27.793752 kubelet[3609]: I0123 18:52:27.793741 3609 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:52:27.796723 kubelet[3609]: I0123 18:52:27.796707 3609 kubelet.go:446] "Attempting to sync node with API server" Jan 23 18:52:27.796791 kubelet[3609]: I0123 18:52:27.796732 3609 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:52:27.796791 kubelet[3609]: I0123 18:52:27.796754 3609 kubelet.go:352] "Adding apiserver pod source" Jan 23 18:52:27.796791 kubelet[3609]: I0123 18:52:27.796765 3609 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:52:27.803349 kubelet[3609]: W0123 18:52:27.802834 3609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 23 18:52:27.803349 kubelet[3609]: E0123 18:52:27.802896 3609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:52:27.803349 kubelet[3609]: W0123 18:52:27.803130 
3609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-a-90f1f3b2aa&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 23 18:52:27.803349 kubelet[3609]: E0123 18:52:27.803152 3609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-a-90f1f3b2aa&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:52:27.803633 kubelet[3609]: I0123 18:52:27.803622 3609 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:52:27.804015 kubelet[3609]: I0123 18:52:27.803980 3609 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 18:52:27.804073 kubelet[3609]: W0123 18:52:27.804030 3609 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 23 18:52:27.806316 kubelet[3609]: I0123 18:52:27.806290 3609 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:52:27.806373 kubelet[3609]: I0123 18:52:27.806336 3609 server.go:1287] "Started kubelet" Jan 23 18:52:27.807536 kubelet[3609]: I0123 18:52:27.807373 3609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:52:27.809649 kubelet[3609]: I0123 18:52:27.809622 3609 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:52:27.808000 audit[3620]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:27.808000 audit[3620]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc6554aad0 a2=0 a3=0 items=0 ppid=3609 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:52:27.810825 kubelet[3609]: I0123 18:52:27.810811 3609 server.go:479] "Adding debug handlers to kubelet server" Jan 23 18:52:27.809000 audit[3621]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:27.809000 audit[3621]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd80341420 a2=0 a3=0 items=0 ppid=3609 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:52:27.811966 kubelet[3609]: I0123 18:52:27.811924 3609 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:52:27.812180 kubelet[3609]: I0123 18:52:27.812170 3609 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:52:27.813431 kubelet[3609]: I0123 18:52:27.812650 3609 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:52:27.814006 kubelet[3609]: E0123 18:52:27.812336 3609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.1.0-a-90f1f3b2aa.188d70e2bf9fa993 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.1.0-a-90f1f3b2aa,UID:ci-4547.1.0-a-90f1f3b2aa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.1.0-a-90f1f3b2aa,},FirstTimestamp:2026-01-23 18:52:27.806304659 +0000 UTC m=+0.304867412,LastTimestamp:2026-01-23 18:52:27.806304659 +0000 UTC m=+0.304867412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.1.0-a-90f1f3b2aa,}" Jan 23 18:52:27.814696 kubelet[3609]: I0123 18:52:27.814684 3609 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:52:27.815674 kubelet[3609]: E0123 18:52:27.815658 3609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-90f1f3b2aa\" not found" Jan 23 18:52:27.815915 kubelet[3609]: I0123 18:52:27.815901 3609 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:52:27.816001 kubelet[3609]: I0123 18:52:27.815995 3609 reconciler.go:26] 
"Reconciler: start to sync state" Jan 23 18:52:27.816110 kubelet[3609]: E0123 18:52:27.816093 3609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-90f1f3b2aa?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="200ms" Jan 23 18:52:27.816649 kubelet[3609]: W0123 18:52:27.816632 3609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 23 18:52:27.817394 kubelet[3609]: E0123 18:52:27.816721 3609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:52:27.817394 kubelet[3609]: I0123 18:52:27.817039 3609 factory.go:221] Registration of the systemd container factory successfully Jan 23 18:52:27.817394 kubelet[3609]: I0123 18:52:27.817093 3609 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:52:27.815000 audit[3623]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:27.815000 audit[3623]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc6d8c2840 a2=0 a3=0 items=0 ppid=3609 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.815000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:52:27.818447 kubelet[3609]: I0123 18:52:27.818433 3609 factory.go:221] Registration of the containerd container factory successfully Jan 23 18:52:27.818000 audit[3625]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:27.818000 audit[3625]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff36817c70 a2=0 a3=0 items=0 ppid=3609 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:52:27.822351 kubelet[3609]: E0123 18:52:27.822336 3609 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:52:27.849000 audit[3631]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:27.849000 audit[3631]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc0abb00e0 a2=0 a3=0 items=0 ppid=3609 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 23 18:52:27.851052 kubelet[3609]: I0123 18:52:27.851026 3609 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 23 18:52:27.851000 audit[3634]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3634 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:27.851000 audit[3634]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff001eaf40 a2=0 a3=0 items=0 ppid=3609 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:52:27.851000 audit[3633]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:27.851000 audit[3633]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcae6ca00 a2=0 a3=0 items=0 ppid=3609 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:52:27.853524 kubelet[3609]: I0123 18:52:27.853367 3609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 18:52:27.853524 kubelet[3609]: I0123 18:52:27.853517 3609 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 18:52:27.853579 kubelet[3609]: I0123 18:52:27.853537 3609 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 18:52:27.853579 kubelet[3609]: I0123 18:52:27.853543 3609 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 18:52:27.853621 kubelet[3609]: E0123 18:52:27.853583 3609 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:52:27.854536 kubelet[3609]: I0123 18:52:27.854385 3609 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:52:27.854536 kubelet[3609]: I0123 18:52:27.854398 3609 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:52:27.854536 kubelet[3609]: I0123 18:52:27.854411 3609 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:52:27.853000 audit[3637]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3637 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:27.853000 audit[3637]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2b466590 a2=0 a3=0 items=0 ppid=3609 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:52:27.855247 kubelet[3609]: W0123 18:52:27.855219 3609 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 23 18:52:27.855631 kubelet[3609]: E0123 18:52:27.855572 3609 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:52:27.855000 audit[3639]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=3639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:27.855000 audit[3636]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:27.855000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcb071dd0 a2=0 a3=0 items=0 ppid=3609 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:52:27.855000 audit[3639]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3b247bb0 a2=0 a3=0 items=0 ppid=3609 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:52:27.856000 audit[3640]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:27.856000 audit[3640]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6821c4f0 a2=0 a3=0 items=0 ppid=3609 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.856000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:52:27.857000 audit[3641]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3641 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:27.857000 audit[3641]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffec889e10 a2=0 a3=0 items=0 ppid=3609 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:27.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:52:27.862350 kubelet[3609]: I0123 18:52:27.862339 3609 policy_none.go:49] "None policy: Start" Jan 23 18:52:27.862405 kubelet[3609]: I0123 18:52:27.862400 3609 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:52:27.862435 kubelet[3609]: I0123 18:52:27.862431 3609 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:52:27.869236 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:52:27.879528 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:52:27.882183 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 18:52:27.892969 kubelet[3609]: I0123 18:52:27.892951 3609 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 18:52:27.893106 kubelet[3609]: I0123 18:52:27.893089 3609 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:52:27.893142 kubelet[3609]: I0123 18:52:27.893106 3609 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:52:27.893845 kubelet[3609]: I0123 18:52:27.893686 3609 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:52:27.895126 kubelet[3609]: E0123 18:52:27.894948 3609 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:52:27.895126 kubelet[3609]: E0123 18:52:27.894982 3609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.1.0-a-90f1f3b2aa\" not found" Jan 23 18:52:27.962320 systemd[1]: Created slice kubepods-burstable-poda34ec8d3dea61f46f14080c90fd664a7.slice - libcontainer container kubepods-burstable-poda34ec8d3dea61f46f14080c90fd664a7.slice. Jan 23 18:52:27.981661 kubelet[3609]: E0123 18:52:27.981596 3609 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-90f1f3b2aa\" not found" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:27.984165 systemd[1]: Created slice kubepods-burstable-poddf59ea939260846afca1ca1859ef6044.slice - libcontainer container kubepods-burstable-poddf59ea939260846afca1ca1859ef6044.slice. 
Jan 23 18:52:27.985649 kubelet[3609]: E0123 18:52:27.985634 3609 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-90f1f3b2aa\" not found" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:27.994767 kubelet[3609]: I0123 18:52:27.994754 3609 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:27.995300 kubelet[3609]: E0123 18:52:27.995280 3609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:27.996902 systemd[1]: Created slice kubepods-burstable-podea0ac50597651ed2b4317fa758445c71.slice - libcontainer container kubepods-burstable-podea0ac50597651ed2b4317fa758445c71.slice. Jan 23 18:52:27.998357 kubelet[3609]: E0123 18:52:27.998338 3609 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-90f1f3b2aa\" not found" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016224 kubelet[3609]: I0123 18:52:28.016203 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a34ec8d3dea61f46f14080c90fd664a7-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"a34ec8d3dea61f46f14080c90fd664a7\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016296 kubelet[3609]: I0123 18:52:28.016231 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016296 kubelet[3609]: I0123 
18:52:28.016250 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016296 kubelet[3609]: I0123 18:52:28.016267 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-kubeconfig\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016296 kubelet[3609]: I0123 18:52:28.016283 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a34ec8d3dea61f46f14080c90fd664a7-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"a34ec8d3dea61f46f14080c90fd664a7\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016396 kubelet[3609]: I0123 18:52:28.016299 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a34ec8d3dea61f46f14080c90fd664a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"a34ec8d3dea61f46f14080c90fd664a7\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016396 kubelet[3609]: I0123 18:52:28.016317 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" 
(UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016396 kubelet[3609]: I0123 18:52:28.016334 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016396 kubelet[3609]: I0123 18:52:28.016351 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea0ac50597651ed2b4317fa758445c71-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"ea0ac50597651ed2b4317fa758445c71\") " pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.016658 kubelet[3609]: E0123 18:52:28.016638 3609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-90f1f3b2aa?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="400ms" Jan 23 18:52:28.197296 kubelet[3609]: I0123 18:52:28.197269 3609 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.197627 kubelet[3609]: E0123 18:52:28.197602 3609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.282759 containerd[2555]: time="2026-01-23T18:52:28.282670759Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-a-90f1f3b2aa,Uid:a34ec8d3dea61f46f14080c90fd664a7,Namespace:kube-system,Attempt:0,}" Jan 23 18:52:28.287094 containerd[2555]: time="2026-01-23T18:52:28.287064581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa,Uid:df59ea939260846afca1ca1859ef6044,Namespace:kube-system,Attempt:0,}" Jan 23 18:52:28.299688 containerd[2555]: time="2026-01-23T18:52:28.299661732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-a-90f1f3b2aa,Uid:ea0ac50597651ed2b4317fa758445c71,Namespace:kube-system,Attempt:0,}" Jan 23 18:52:28.341421 containerd[2555]: time="2026-01-23T18:52:28.341385428Z" level=info msg="connecting to shim e608466e811cd9087b099e9465fd21e1753516a108daae00f7701595d9d89293" address="unix:///run/containerd/s/caceca2b2828d51f08c092d3e181a8c1e671c205ad21495b102607999e8f87d9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:28.369997 systemd[1]: Started cri-containerd-e608466e811cd9087b099e9465fd21e1753516a108daae00f7701595d9d89293.scope - libcontainer container e608466e811cd9087b099e9465fd21e1753516a108daae00f7701595d9d89293. 
Jan 23 18:52:28.373582 containerd[2555]: time="2026-01-23T18:52:28.370446763Z" level=info msg="connecting to shim 5d14597e1c460ac84cebd9d379e4859f7a0430541b43c5fb16ffe81af2830ff2" address="unix:///run/containerd/s/c6379d816e380dbdf04810ab7c34b121fc579ca739fb96c9d56e55c038b83b95" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:28.395858 containerd[2555]: time="2026-01-23T18:52:28.395828664Z" level=info msg="connecting to shim 1d6a99ece38b196be4559abdf031285fdd77dd4431ea0b8acd89255f0b6de216" address="unix:///run/containerd/s/c12349f904350387849a7f7d310ce1f2b46460cc4751d6e22f5b909bc60a169b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:28.396664 systemd[1]: Started cri-containerd-5d14597e1c460ac84cebd9d379e4859f7a0430541b43c5fb16ffe81af2830ff2.scope - libcontainer container 5d14597e1c460ac84cebd9d379e4859f7a0430541b43c5fb16ffe81af2830ff2. Jan 23 18:52:28.397000 audit: BPF prog-id=107 op=LOAD Jan 23 18:52:28.397000 audit: BPF prog-id=108 op=LOAD Jan 23 18:52:28.397000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536303834363665383131636439303837623039396539343635666432 Jan 23 18:52:28.397000 audit: BPF prog-id=108 op=UNLOAD Jan 23 18:52:28.397000 audit[3662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.397000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536303834363665383131636439303837623039396539343635666432 Jan 23 18:52:28.398000 audit: BPF prog-id=109 op=LOAD Jan 23 18:52:28.398000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536303834363665383131636439303837623039396539343635666432 Jan 23 18:52:28.398000 audit: BPF prog-id=110 op=LOAD Jan 23 18:52:28.398000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536303834363665383131636439303837623039396539343635666432 Jan 23 18:52:28.398000 audit: BPF prog-id=110 op=UNLOAD Jan 23 18:52:28.398000 audit[3662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536303834363665383131636439303837623039396539343635666432 Jan 23 18:52:28.398000 audit: BPF prog-id=109 op=UNLOAD Jan 23 18:52:28.398000 audit[3662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536303834363665383131636439303837623039396539343635666432 Jan 23 18:52:28.398000 audit: BPF prog-id=111 op=LOAD Jan 23 18:52:28.398000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3651 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536303834363665383131636439303837623039396539343635666432 Jan 23 18:52:28.417865 kubelet[3609]: E0123 18:52:28.417815 3609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-90f1f3b2aa?timeout=10s\": dial tcp 
10.200.8.14:6443: connect: connection refused" interval="800ms" Jan 23 18:52:28.416000 audit: BPF prog-id=112 op=LOAD Jan 23 18:52:28.417000 audit: BPF prog-id=113 op=LOAD Jan 23 18:52:28.417000 audit[3692]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3682 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313435393765316334363061633834636562643964333739653438 Jan 23 18:52:28.418000 audit: BPF prog-id=113 op=UNLOAD Jan 23 18:52:28.418000 audit[3692]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3682 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313435393765316334363061633834636562643964333739653438 Jan 23 18:52:28.418000 audit: BPF prog-id=114 op=LOAD Jan 23 18:52:28.418000 audit[3692]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3682 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.418000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313435393765316334363061633834636562643964333739653438 Jan 23 18:52:28.419000 audit: BPF prog-id=115 op=LOAD Jan 23 18:52:28.419000 audit[3692]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3682 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313435393765316334363061633834636562643964333739653438 Jan 23 18:52:28.419000 audit: BPF prog-id=115 op=UNLOAD Jan 23 18:52:28.419000 audit[3692]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3682 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313435393765316334363061633834636562643964333739653438 Jan 23 18:52:28.419000 audit: BPF prog-id=114 op=UNLOAD Jan 23 18:52:28.419000 audit[3692]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3682 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:52:28.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313435393765316334363061633834636562643964333739653438 Jan 23 18:52:28.419000 audit: BPF prog-id=116 op=LOAD Jan 23 18:52:28.419000 audit[3692]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3682 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313435393765316334363061633834636562643964333739653438 Jan 23 18:52:28.428867 systemd[1]: Started cri-containerd-1d6a99ece38b196be4559abdf031285fdd77dd4431ea0b8acd89255f0b6de216.scope - libcontainer container 1d6a99ece38b196be4559abdf031285fdd77dd4431ea0b8acd89255f0b6de216. 
Jan 23 18:52:28.450000 audit: BPF prog-id=117 op=LOAD Jan 23 18:52:28.451000 audit: BPF prog-id=118 op=LOAD Jan 23 18:52:28.451000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3720 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164366139396563653338623139366265343535396162646630333132 Jan 23 18:52:28.451000 audit: BPF prog-id=118 op=UNLOAD Jan 23 18:52:28.451000 audit[3734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3720 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164366139396563653338623139366265343535396162646630333132 Jan 23 18:52:28.452000 audit: BPF prog-id=119 op=LOAD Jan 23 18:52:28.452000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3720 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.452000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164366139396563653338623139366265343535396162646630333132 Jan 23 18:52:28.452000 audit: BPF prog-id=120 op=LOAD Jan 23 18:52:28.452000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3720 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164366139396563653338623139366265343535396162646630333132 Jan 23 18:52:28.453000 audit: BPF prog-id=120 op=UNLOAD Jan 23 18:52:28.453000 audit[3734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3720 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164366139396563653338623139366265343535396162646630333132 Jan 23 18:52:28.453000 audit: BPF prog-id=119 op=UNLOAD Jan 23 18:52:28.453000 audit[3734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3720 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:52:28.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164366139396563653338623139366265343535396162646630333132 Jan 23 18:52:28.453000 audit: BPF prog-id=121 op=LOAD Jan 23 18:52:28.453000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3720 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164366139396563653338623139366265343535396162646630333132 Jan 23 18:52:28.457734 containerd[2555]: time="2026-01-23T18:52:28.457709239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-a-90f1f3b2aa,Uid:a34ec8d3dea61f46f14080c90fd664a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e608466e811cd9087b099e9465fd21e1753516a108daae00f7701595d9d89293\"" Jan 23 18:52:28.463839 containerd[2555]: time="2026-01-23T18:52:28.463767377Z" level=info msg="CreateContainer within sandbox \"e608466e811cd9087b099e9465fd21e1753516a108daae00f7701595d9d89293\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:52:28.482517 containerd[2555]: time="2026-01-23T18:52:28.482493794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa,Uid:df59ea939260846afca1ca1859ef6044,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d14597e1c460ac84cebd9d379e4859f7a0430541b43c5fb16ffe81af2830ff2\"" Jan 23 18:52:28.486049 containerd[2555]: 
time="2026-01-23T18:52:28.485811770Z" level=info msg="CreateContainer within sandbox \"5d14597e1c460ac84cebd9d379e4859f7a0430541b43c5fb16ffe81af2830ff2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:52:28.488739 containerd[2555]: time="2026-01-23T18:52:28.488718902Z" level=info msg="Container 63ce94f9e4fe9eff600d7633b0c3ff3444f673e28add0c9858ae88df42b4403b: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:28.509291 containerd[2555]: time="2026-01-23T18:52:28.509267479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-a-90f1f3b2aa,Uid:ea0ac50597651ed2b4317fa758445c71,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d6a99ece38b196be4559abdf031285fdd77dd4431ea0b8acd89255f0b6de216\"" Jan 23 18:52:28.511702 containerd[2555]: time="2026-01-23T18:52:28.511677787Z" level=info msg="CreateContainer within sandbox \"1d6a99ece38b196be4559abdf031285fdd77dd4431ea0b8acd89255f0b6de216\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:52:28.516209 containerd[2555]: time="2026-01-23T18:52:28.516184522Z" level=info msg="CreateContainer within sandbox \"e608466e811cd9087b099e9465fd21e1753516a108daae00f7701595d9d89293\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"63ce94f9e4fe9eff600d7633b0c3ff3444f673e28add0c9858ae88df42b4403b\"" Jan 23 18:52:28.516738 containerd[2555]: time="2026-01-23T18:52:28.516717517Z" level=info msg="StartContainer for \"63ce94f9e4fe9eff600d7633b0c3ff3444f673e28add0c9858ae88df42b4403b\"" Jan 23 18:52:28.517593 containerd[2555]: time="2026-01-23T18:52:28.517568348Z" level=info msg="connecting to shim 63ce94f9e4fe9eff600d7633b0c3ff3444f673e28add0c9858ae88df42b4403b" address="unix:///run/containerd/s/caceca2b2828d51f08c092d3e181a8c1e671c205ad21495b102607999e8f87d9" protocol=ttrpc version=3 Jan 23 18:52:28.521305 containerd[2555]: time="2026-01-23T18:52:28.521237751Z" level=info msg="Container 
39c2d3e00dfdb48f6c17ae282028cf7c0bb460e5ac6043c25276743eb7dce53e: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:28.535656 systemd[1]: Started cri-containerd-63ce94f9e4fe9eff600d7633b0c3ff3444f673e28add0c9858ae88df42b4403b.scope - libcontainer container 63ce94f9e4fe9eff600d7633b0c3ff3444f673e28add0c9858ae88df42b4403b. Jan 23 18:52:28.539802 containerd[2555]: time="2026-01-23T18:52:28.539699786Z" level=info msg="CreateContainer within sandbox \"5d14597e1c460ac84cebd9d379e4859f7a0430541b43c5fb16ffe81af2830ff2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"39c2d3e00dfdb48f6c17ae282028cf7c0bb460e5ac6043c25276743eb7dce53e\"" Jan 23 18:52:28.540228 containerd[2555]: time="2026-01-23T18:52:28.540210152Z" level=info msg="StartContainer for \"39c2d3e00dfdb48f6c17ae282028cf7c0bb460e5ac6043c25276743eb7dce53e\"" Jan 23 18:52:28.541436 containerd[2555]: time="2026-01-23T18:52:28.541411811Z" level=info msg="connecting to shim 39c2d3e00dfdb48f6c17ae282028cf7c0bb460e5ac6043c25276743eb7dce53e" address="unix:///run/containerd/s/c6379d816e380dbdf04810ab7c34b121fc579ca739fb96c9d56e55c038b83b95" protocol=ttrpc version=3 Jan 23 18:52:28.543389 containerd[2555]: time="2026-01-23T18:52:28.543270053Z" level=info msg="Container c70383f66941d739d6b55cee660a4e66930ea5221659db5d076a6451a70d61f0: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:28.551000 audit: BPF prog-id=122 op=LOAD Jan 23 18:52:28.552000 audit: BPF prog-id=123 op=LOAD Jan 23 18:52:28.552000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3651 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.552000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636539346639653466653965666636303064373633336230633366 Jan 23 18:52:28.552000 audit: BPF prog-id=123 op=UNLOAD Jan 23 18:52:28.552000 audit[3776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636539346639653466653965666636303064373633336230633366 Jan 23 18:52:28.552000 audit: BPF prog-id=124 op=LOAD Jan 23 18:52:28.552000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3651 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636539346639653466653965666636303064373633336230633366 Jan 23 18:52:28.552000 audit: BPF prog-id=125 op=LOAD Jan 23 18:52:28.552000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3651 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 18:52:28.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636539346639653466653965666636303064373633336230633366 Jan 23 18:52:28.552000 audit: BPF prog-id=125 op=UNLOAD Jan 23 18:52:28.552000 audit[3776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636539346639653466653965666636303064373633336230633366 Jan 23 18:52:28.552000 audit: BPF prog-id=124 op=UNLOAD Jan 23 18:52:28.552000 audit[3776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3651 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636539346639653466653965666636303064373633336230633366 Jan 23 18:52:28.552000 audit: BPF prog-id=126 op=LOAD Jan 23 18:52:28.552000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3651 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636539346639653466653965666636303064373633336230633366 Jan 23 18:52:28.561266 systemd[1]: Started cri-containerd-39c2d3e00dfdb48f6c17ae282028cf7c0bb460e5ac6043c25276743eb7dce53e.scope - libcontainer container 39c2d3e00dfdb48f6c17ae282028cf7c0bb460e5ac6043c25276743eb7dce53e. Jan 23 18:52:28.562979 containerd[2555]: time="2026-01-23T18:52:28.562958395Z" level=info msg="CreateContainer within sandbox \"1d6a99ece38b196be4559abdf031285fdd77dd4431ea0b8acd89255f0b6de216\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c70383f66941d739d6b55cee660a4e66930ea5221659db5d076a6451a70d61f0\"" Jan 23 18:52:28.565618 kubelet[3609]: E0123 18:52:28.565504 3609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.1.0-a-90f1f3b2aa.188d70e2bf9fa993 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.1.0-a-90f1f3b2aa,UID:ci-4547.1.0-a-90f1f3b2aa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.1.0-a-90f1f3b2aa,},FirstTimestamp:2026-01-23 18:52:27.806304659 +0000 UTC m=+0.304867412,LastTimestamp:2026-01-23 18:52:27.806304659 +0000 UTC m=+0.304867412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.1.0-a-90f1f3b2aa,}" Jan 23 18:52:28.566248 containerd[2555]: time="2026-01-23T18:52:28.566110561Z" level=info msg="StartContainer for 
\"c70383f66941d739d6b55cee660a4e66930ea5221659db5d076a6451a70d61f0\"" Jan 23 18:52:28.567283 containerd[2555]: time="2026-01-23T18:52:28.567259063Z" level=info msg="connecting to shim c70383f66941d739d6b55cee660a4e66930ea5221659db5d076a6451a70d61f0" address="unix:///run/containerd/s/c12349f904350387849a7f7d310ce1f2b46460cc4751d6e22f5b909bc60a169b" protocol=ttrpc version=3 Jan 23 18:52:28.585716 systemd[1]: Started cri-containerd-c70383f66941d739d6b55cee660a4e66930ea5221659db5d076a6451a70d61f0.scope - libcontainer container c70383f66941d739d6b55cee660a4e66930ea5221659db5d076a6451a70d61f0. Jan 23 18:52:28.592000 audit: BPF prog-id=127 op=LOAD Jan 23 18:52:28.592000 audit: BPF prog-id=128 op=LOAD Jan 23 18:52:28.592000 audit[3798]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3682 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339633264336530306466646234386636633137616532383230323863 Jan 23 18:52:28.593000 audit: BPF prog-id=128 op=UNLOAD Jan 23 18:52:28.593000 audit[3798]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3682 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.593000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339633264336530306466646234386636633137616532383230323863 Jan 23 18:52:28.594000 audit: BPF prog-id=129 op=LOAD Jan 23 18:52:28.594000 audit[3798]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3682 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339633264336530306466646234386636633137616532383230323863 Jan 23 18:52:28.594000 audit: BPF prog-id=130 op=LOAD Jan 23 18:52:28.594000 audit[3798]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3682 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339633264336530306466646234386636633137616532383230323863 Jan 23 18:52:28.594000 audit: BPF prog-id=130 op=UNLOAD Jan 23 18:52:28.594000 audit[3798]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3682 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 18:52:28.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339633264336530306466646234386636633137616532383230323863 Jan 23 18:52:28.594000 audit: BPF prog-id=129 op=UNLOAD Jan 23 18:52:28.594000 audit[3798]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3682 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339633264336530306466646234386636633137616532383230323863 Jan 23 18:52:28.597000 audit: BPF prog-id=131 op=LOAD Jan 23 18:52:28.597000 audit[3798]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3682 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339633264336530306466646234386636633137616532383230323863 Jan 23 18:52:28.605887 kubelet[3609]: I0123 18:52:28.605552 3609 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.605887 kubelet[3609]: E0123 18:52:28.605852 3609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.613379 containerd[2555]: time="2026-01-23T18:52:28.613339537Z" level=info msg="StartContainer for \"63ce94f9e4fe9eff600d7633b0c3ff3444f673e28add0c9858ae88df42b4403b\" returns successfully" Jan 23 18:52:28.613000 audit: BPF prog-id=132 op=LOAD Jan 23 18:52:28.614000 audit: BPF prog-id=133 op=LOAD Jan 23 18:52:28.614000 audit[3811]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3720 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303338336636363934316437333964366235356365653636306134 Jan 23 18:52:28.614000 audit: BPF prog-id=133 op=UNLOAD Jan 23 18:52:28.614000 audit[3811]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3720 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303338336636363934316437333964366235356365653636306134 Jan 23 18:52:28.614000 audit: BPF prog-id=134 op=LOAD Jan 23 18:52:28.614000 audit[3811]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3720 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303338336636363934316437333964366235356365653636306134 Jan 23 18:52:28.614000 audit: BPF prog-id=135 op=LOAD Jan 23 18:52:28.614000 audit[3811]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3720 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303338336636363934316437333964366235356365653636306134 Jan 23 18:52:28.614000 audit: BPF prog-id=135 op=UNLOAD Jan 23 18:52:28.614000 audit[3811]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3720 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303338336636363934316437333964366235356365653636306134 Jan 23 18:52:28.614000 audit: BPF prog-id=134 op=UNLOAD Jan 23 18:52:28.614000 audit[3811]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3720 pid=3811 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303338336636363934316437333964366235356365653636306134 Jan 23 18:52:28.615000 audit: BPF prog-id=136 op=LOAD Jan 23 18:52:28.615000 audit[3811]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3720 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:28.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303338336636363934316437333964366235356365653636306134 Jan 23 18:52:28.678618 containerd[2555]: time="2026-01-23T18:52:28.678592774Z" level=info msg="StartContainer for \"c70383f66941d739d6b55cee660a4e66930ea5221659db5d076a6451a70d61f0\" returns successfully" Jan 23 18:52:28.679813 containerd[2555]: time="2026-01-23T18:52:28.679779818Z" level=info msg="StartContainer for \"39c2d3e00dfdb48f6c17ae282028cf7c0bb460e5ac6043c25276743eb7dce53e\" returns successfully" Jan 23 18:52:28.866571 kubelet[3609]: E0123 18:52:28.864692 3609 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-90f1f3b2aa\" not found" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.869128 kubelet[3609]: E0123 18:52:28.869107 3609 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4547.1.0-a-90f1f3b2aa\" not found" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:28.872669 kubelet[3609]: E0123 18:52:28.871622 3609 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-90f1f3b2aa\" not found" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:29.408614 kubelet[3609]: I0123 18:52:29.408096 3609 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:29.873995 kubelet[3609]: E0123 18:52:29.873950 3609 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-90f1f3b2aa\" not found" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:29.876246 kubelet[3609]: E0123 18:52:29.876110 3609 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-90f1f3b2aa\" not found" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.743225 kubelet[3609]: E0123 18:52:30.743179 3609 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.1.0-a-90f1f3b2aa\" not found" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.801503 kubelet[3609]: I0123 18:52:30.800856 3609 apiserver.go:52] "Watching apiserver" Jan 23 18:52:30.813595 kubelet[3609]: I0123 18:52:30.813569 3609 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.816850 kubelet[3609]: I0123 18:52:30.816818 3609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.873809 kubelet[3609]: I0123 18:52:30.873749 3609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.885903 kubelet[3609]: E0123 18:52:30.885755 3609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-a-90f1f3b2aa\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.885903 kubelet[3609]: I0123 18:52:30.885780 3609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.886521 kubelet[3609]: E0123 18:52:30.886504 3609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-a-90f1f3b2aa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.889778 kubelet[3609]: E0123 18:52:30.889751 3609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.889778 kubelet[3609]: I0123 18:52:30.889779 3609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.894868 kubelet[3609]: E0123 18:52:30.894846 3609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-a-90f1f3b2aa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:52:30.916139 kubelet[3609]: I0123 18:52:30.916121 3609 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:52:32.935491 systemd[1]: Reload requested from client PID 3873 ('systemctl') (unit session-10.scope)... Jan 23 18:52:32.935506 systemd[1]: Reloading... Jan 23 18:52:33.024535 zram_generator::config[3925]: No configuration found. 
Jan 23 18:52:33.178237 kubelet[3609]: I0123 18:52:33.178207 3609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:33.185868 kubelet[3609]: W0123 18:52:33.185788 3609 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 18:52:33.229798 systemd[1]: Reloading finished in 294 ms.
Jan 23 18:52:33.262163 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:52:33.282413 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 18:52:33.282681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:52:33.290885 kernel: kauditd_printk_skb: 201 callbacks suppressed
Jan 23 18:52:33.290933 kernel: audit: type=1131 audit(1769194353.282:421): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:52:33.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:52:33.282738 systemd[1]: kubelet.service: Consumed 600ms CPU time, 130.7M memory peak.
Jan 23 18:52:33.286716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:52:33.286000 audit: BPF prog-id=137 op=LOAD
Jan 23 18:52:33.286000 audit: BPF prog-id=138 op=LOAD
Jan 23 18:52:33.294545 kernel: audit: type=1334 audit(1769194353.286:422): prog-id=137 op=LOAD
Jan 23 18:52:33.294578 kernel: audit: type=1334 audit(1769194353.286:423): prog-id=138 op=LOAD
Jan 23 18:52:33.286000 audit: BPF prog-id=92 op=UNLOAD
Jan 23 18:52:33.296016 kernel: audit: type=1334 audit(1769194353.286:424): prog-id=92 op=UNLOAD
Jan 23 18:52:33.286000 audit: BPF prog-id=93 op=UNLOAD
Jan 23 18:52:33.297154 kernel: audit: type=1334 audit(1769194353.286:425): prog-id=93 op=UNLOAD
Jan 23 18:52:33.287000 audit: BPF prog-id=139 op=LOAD
Jan 23 18:52:33.298277 kernel: audit: type=1334 audit(1769194353.287:426): prog-id=139 op=LOAD
Jan 23 18:52:33.287000 audit: BPF prog-id=91 op=UNLOAD
Jan 23 18:52:33.302517 kernel: audit: type=1334 audit(1769194353.287:427): prog-id=91 op=UNLOAD
Jan 23 18:52:33.302589 kernel: audit: type=1334 audit(1769194353.291:428): prog-id=140 op=LOAD
Jan 23 18:52:33.291000 audit: BPF prog-id=140 op=LOAD
Jan 23 18:52:33.291000 audit: BPF prog-id=97 op=UNLOAD
Jan 23 18:52:33.305276 kernel: audit: type=1334 audit(1769194353.291:429): prog-id=97 op=UNLOAD
Jan 23 18:52:33.305329 kernel: audit: type=1334 audit(1769194353.291:430): prog-id=141 op=LOAD
Jan 23 18:52:33.291000 audit: BPF prog-id=141 op=LOAD
Jan 23 18:52:33.291000 audit: BPF prog-id=142 op=LOAD
Jan 23 18:52:33.291000 audit: BPF prog-id=98 op=UNLOAD
Jan 23 18:52:33.291000 audit: BPF prog-id=99 op=UNLOAD
Jan 23 18:52:33.292000 audit: BPF prog-id=143 op=LOAD
Jan 23 18:52:33.292000 audit: BPF prog-id=100 op=UNLOAD
Jan 23 18:52:33.292000 audit: BPF prog-id=144 op=LOAD
Jan 23 18:52:33.292000 audit: BPF prog-id=145 op=LOAD
Jan 23 18:52:33.292000 audit: BPF prog-id=101 op=UNLOAD
Jan 23 18:52:33.292000 audit: BPF prog-id=102 op=UNLOAD
Jan 23 18:52:33.298000 audit: BPF prog-id=146 op=LOAD
Jan 23 18:52:33.298000 audit: BPF prog-id=104 op=UNLOAD
Jan 23 18:52:33.298000 audit: BPF prog-id=147 op=LOAD
Jan 23 18:52:33.298000 audit: BPF prog-id=148 op=LOAD
Jan 23 18:52:33.298000 audit: BPF prog-id=105 op=UNLOAD
Jan 23 18:52:33.298000 audit: BPF prog-id=106 op=UNLOAD
Jan 23 18:52:33.299000 audit: BPF prog-id=149 op=LOAD
Jan 23 18:52:33.299000 audit: BPF prog-id=88 op=UNLOAD
Jan 23 18:52:33.299000 audit: BPF prog-id=150 op=LOAD
Jan 23 18:52:33.299000 audit: BPF prog-id=151 op=LOAD
Jan 23 18:52:33.299000 audit: BPF prog-id=89 op=UNLOAD
Jan 23 18:52:33.299000 audit: BPF prog-id=90 op=UNLOAD
Jan 23 18:52:33.300000 audit: BPF prog-id=152 op=LOAD
Jan 23 18:52:33.300000 audit: BPF prog-id=87 op=UNLOAD
Jan 23 18:52:33.302000 audit: BPF prog-id=153 op=LOAD
Jan 23 18:52:33.302000 audit: BPF prog-id=103 op=UNLOAD
Jan 23 18:52:33.304000 audit: BPF prog-id=154 op=LOAD
Jan 23 18:52:33.304000 audit: BPF prog-id=94 op=UNLOAD
Jan 23 18:52:33.305000 audit: BPF prog-id=155 op=LOAD
Jan 23 18:52:33.305000 audit: BPF prog-id=156 op=LOAD
Jan 23 18:52:33.305000 audit: BPF prog-id=95 op=UNLOAD
Jan 23 18:52:33.305000 audit: BPF prog-id=96 op=UNLOAD
Jan 23 18:52:33.784621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:52:33.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:52:33.793733 (kubelet)[3990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 18:52:33.831872 kubelet[3990]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 18:52:33.831872 kubelet[3990]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 18:52:33.831872 kubelet[3990]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 18:52:33.832131 kubelet[3990]: I0123 18:52:33.831925 3990 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 18:52:33.838547 kubelet[3990]: I0123 18:52:33.838063 3990 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 18:52:33.838547 kubelet[3990]: I0123 18:52:33.838078 3990 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 18:52:33.838547 kubelet[3990]: I0123 18:52:33.838236 3990 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 18:52:33.839236 kubelet[3990]: I0123 18:52:33.839224 3990 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 18:52:33.840860 kubelet[3990]: I0123 18:52:33.840845 3990 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 18:52:33.847027 kubelet[3990]: I0123 18:52:33.846786 3990 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 18:52:33.849949 kubelet[3990]: I0123 18:52:33.849932 3990 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 18:52:33.851615 kubelet[3990]: I0123 18:52:33.850499 3990 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 18:52:33.851615 kubelet[3990]: I0123 18:52:33.850524 3990 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-a-90f1f3b2aa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 18:52:33.851615 kubelet[3990]: I0123 18:52:33.851240 3990 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 18:52:33.851615 kubelet[3990]: I0123 18:52:33.851256 3990 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 18:52:33.851840 kubelet[3990]: I0123 18:52:33.851309 3990 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:52:33.851840 kubelet[3990]: I0123 18:52:33.851463 3990 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 18:52:33.851840 kubelet[3990]: I0123 18:52:33.851510 3990 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 18:52:33.851840 kubelet[3990]: I0123 18:52:33.851535 3990 kubelet.go:352] "Adding apiserver pod source"
Jan 23 18:52:33.851840 kubelet[3990]: I0123 18:52:33.851545 3990 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 18:52:33.856657 kubelet[3990]: I0123 18:52:33.856617 3990 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 23 18:52:33.857781 kubelet[3990]: I0123 18:52:33.857640 3990 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 18:52:33.858779 kubelet[3990]: I0123 18:52:33.858683 3990 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 18:52:33.858950 kubelet[3990]: I0123 18:52:33.858944 3990 server.go:1287] "Started kubelet"
Jan 23 18:52:33.859238 kubelet[3990]: I0123 18:52:33.859213 3990 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 18:52:33.859892 kubelet[3990]: I0123 18:52:33.859775 3990 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 18:52:33.862797 kubelet[3990]: I0123 18:52:33.862764 3990 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 18:52:33.862886 kubelet[3990]: I0123 18:52:33.860626 3990 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 18:52:33.864160 kubelet[3990]: I0123 18:52:33.864092 3990 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 18:52:33.866409 kubelet[3990]: I0123 18:52:33.865672 3990 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 18:52:33.868731 kubelet[3990]: I0123 18:52:33.868700 3990 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 18:52:33.869190 kubelet[3990]: E0123 18:52:33.869133 3990 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-90f1f3b2aa\" not found"
Jan 23 18:52:33.871179 kubelet[3990]: I0123 18:52:33.871125 3990 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 18:52:33.871589 kubelet[3990]: I0123 18:52:33.871535 3990 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 18:52:33.874912 kubelet[3990]: I0123 18:52:33.874048 3990 factory.go:221] Registration of the systemd container factory successfully
Jan 23 18:52:33.874912 kubelet[3990]: I0123 18:52:33.874123 3990 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 18:52:33.879471 kubelet[3990]: I0123 18:52:33.879065 3990 factory.go:221] Registration of the containerd container factory successfully
Jan 23 18:52:33.886987 kubelet[3990]: I0123 18:52:33.886968 3990 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 18:52:33.890654 kubelet[3990]: I0123 18:52:33.890627 3990 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 18:52:33.890654 kubelet[3990]: I0123 18:52:33.890657 3990 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 18:52:33.890747 kubelet[3990]: I0123 18:52:33.890670 3990 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 18:52:33.890747 kubelet[3990]: I0123 18:52:33.890677 3990 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 18:52:33.890747 kubelet[3990]: E0123 18:52:33.890712 3990 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 18:52:33.916141 kubelet[3990]: I0123 18:52:33.916125 3990 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 18:52:33.916234 kubelet[3990]: I0123 18:52:33.916221 3990 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 18:52:33.916302 kubelet[3990]: I0123 18:52:33.916297 3990 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:52:33.916424 kubelet[3990]: I0123 18:52:33.916417 3990 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 18:52:33.916491 kubelet[3990]: I0123 18:52:33.916451 3990 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 18:52:33.916491 kubelet[3990]: I0123 18:52:33.916486 3990 policy_none.go:49] "None policy: Start"
Jan 23 18:52:33.916540 kubelet[3990]: I0123 18:52:33.916502 3990 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 18:52:33.916540 kubelet[3990]: I0123 18:52:33.916512 3990 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 18:52:33.916618 kubelet[3990]: I0123 18:52:33.916607 3990 state_mem.go:75] "Updated machine memory state"
Jan 23 18:52:33.921882 kubelet[3990]: I0123 18:52:33.920721 3990 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 18:52:33.921882 kubelet[3990]: I0123 18:52:33.920838 3990 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 18:52:33.921882 kubelet[3990]: I0123 18:52:33.920846 3990 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 18:52:33.921882 kubelet[3990]: I0123 18:52:33.921216 3990 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 18:52:33.924280 kubelet[3990]: E0123 18:52:33.924260 3990 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 18:52:33.991814 kubelet[3990]: I0123 18:52:33.991393 3990 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:33.991814 kubelet[3990]: I0123 18:52:33.991589 3990 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:33.991814 kubelet[3990]: I0123 18:52:33.991804 3990 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.004677 kubelet[3990]: W0123 18:52:34.004644 3990 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 18:52:34.007720 kubelet[3990]: W0123 18:52:34.007598 3990 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 18:52:34.008859 kubelet[3990]: W0123 18:52:34.008841 3990 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 18:52:34.009015 kubelet[3990]: E0123 18:52:34.008912 3990 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-a-90f1f3b2aa\" already exists" pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.023452 kubelet[3990]: I0123 18:52:34.023441 3990 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.034496 kubelet[3990]: I0123 18:52:34.034434 3990 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.034620 kubelet[3990]: I0123 18:52:34.034592 3990 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.074447 kubelet[3990]: I0123 18:52:34.074147 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a34ec8d3dea61f46f14080c90fd664a7-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"a34ec8d3dea61f46f14080c90fd664a7\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.074447 kubelet[3990]: I0123 18:52:34.074182 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a34ec8d3dea61f46f14080c90fd664a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"a34ec8d3dea61f46f14080c90fd664a7\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.074447 kubelet[3990]: I0123 18:52:34.074213 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.074447 kubelet[3990]: I0123 18:52:34.074232 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a34ec8d3dea61f46f14080c90fd664a7-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"a34ec8d3dea61f46f14080c90fd664a7\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.074447 kubelet[3990]: I0123 18:52:34.074249 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.074744 kubelet[3990]: I0123 18:52:34.074267 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.074744 kubelet[3990]: I0123 18:52:34.074292 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-kubeconfig\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.074744 kubelet[3990]: I0123 18:52:34.074312 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df59ea939260846afca1ca1859ef6044-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"df59ea939260846afca1ca1859ef6044\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.074744 kubelet[3990]: I0123 18:52:34.074331 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea0ac50597651ed2b4317fa758445c71-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-a-90f1f3b2aa\" (UID: \"ea0ac50597651ed2b4317fa758445c71\") " pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.857052 kubelet[3990]: I0123 18:52:34.857020 3990 apiserver.go:52] "Watching apiserver"
Jan 23 18:52:34.871620 kubelet[3990]: I0123 18:52:34.871589 3990 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 18:52:34.908980 kubelet[3990]: I0123 18:52:34.908954 3990 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.909387 kubelet[3990]: I0123 18:52:34.909276 3990 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.919202 kubelet[3990]: W0123 18:52:34.919171 3990 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 18:52:34.919310 kubelet[3990]: E0123 18:52:34.919224 3990 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-a-90f1f3b2aa\" already exists" pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.919888 kubelet[3990]: W0123 18:52:34.919871 3990 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 18:52:34.920102 kubelet[3990]: E0123 18:52:34.919912 3990 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-a-90f1f3b2aa\" already exists" pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa"
Jan 23 18:52:34.941549 kubelet[3990]: I0123 18:52:34.941501 3990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.1.0-a-90f1f3b2aa" podStartSLOduration=1.941469128 podStartE2EDuration="1.941469128s" podCreationTimestamp="2026-01-23 18:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:52:34.933112171 +0000 UTC m=+1.136747579" watchObservedRunningTime="2026-01-23 18:52:34.941469128 +0000 UTC m=+1.145104538"
Jan 23 18:52:34.950290 kubelet[3990]: I0123 18:52:34.950253 3990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-90f1f3b2aa" podStartSLOduration=1.950239848 podStartE2EDuration="1.950239848s" podCreationTimestamp="2026-01-23 18:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:52:34.941763083 +0000 UTC m=+1.145398485" watchObservedRunningTime="2026-01-23 18:52:34.950239848 +0000 UTC m=+1.153875251"
Jan 23 18:52:34.950599 kubelet[3990]: I0123 18:52:34.950343 3990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.1.0-a-90f1f3b2aa" podStartSLOduration=1.950335771 podStartE2EDuration="1.950335771s" podCreationTimestamp="2026-01-23 18:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:52:34.950162393 +0000 UTC m=+1.153797796" watchObservedRunningTime="2026-01-23 18:52:34.950335771 +0000 UTC m=+1.153971177"
Jan 23 18:52:39.463501 kubelet[3990]: I0123 18:52:39.463427 3990 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 18:52:39.464326 containerd[2555]: time="2026-01-23T18:52:39.464182632Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 18:52:39.464972 kubelet[3990]: I0123 18:52:39.464871 3990 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 18:52:40.224663 systemd[1]: Created slice kubepods-besteffort-pod8e57897b_29c6_47f6_82b4_b5b9e420aa14.slice - libcontainer container kubepods-besteffort-pod8e57897b_29c6_47f6_82b4_b5b9e420aa14.slice.
Jan 23 18:52:40.316991 kubelet[3990]: I0123 18:52:40.316957 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e57897b-29c6-47f6-82b4-b5b9e420aa14-kube-proxy\") pod \"kube-proxy-7p2f7\" (UID: \"8e57897b-29c6-47f6-82b4-b5b9e420aa14\") " pod="kube-system/kube-proxy-7p2f7"
Jan 23 18:52:40.316991 kubelet[3990]: I0123 18:52:40.316988 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e57897b-29c6-47f6-82b4-b5b9e420aa14-lib-modules\") pod \"kube-proxy-7p2f7\" (UID: \"8e57897b-29c6-47f6-82b4-b5b9e420aa14\") " pod="kube-system/kube-proxy-7p2f7"
Jan 23 18:52:40.317190 kubelet[3990]: I0123 18:52:40.317009 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e57897b-29c6-47f6-82b4-b5b9e420aa14-xtables-lock\") pod \"kube-proxy-7p2f7\" (UID: \"8e57897b-29c6-47f6-82b4-b5b9e420aa14\") " pod="kube-system/kube-proxy-7p2f7"
Jan 23 18:52:40.317190 kubelet[3990]: I0123 18:52:40.317028 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4zz\" (UniqueName: \"kubernetes.io/projected/8e57897b-29c6-47f6-82b4-b5b9e420aa14-kube-api-access-gn4zz\") pod \"kube-proxy-7p2f7\" (UID: \"8e57897b-29c6-47f6-82b4-b5b9e420aa14\") " pod="kube-system/kube-proxy-7p2f7"
Jan 23 18:52:40.493783 systemd[1]: Created slice kubepods-besteffort-pod5bbc2f8a_42af_47f8_abda_8cfa518ad803.slice - libcontainer container kubepods-besteffort-pod5bbc2f8a_42af_47f8_abda_8cfa518ad803.slice.
Jan 23 18:52:40.517840 kubelet[3990]: I0123 18:52:40.517804 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5bbc2f8a-42af-47f8-abda-8cfa518ad803-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2cqvg\" (UID: \"5bbc2f8a-42af-47f8-abda-8cfa518ad803\") " pod="tigera-operator/tigera-operator-7dcd859c48-2cqvg"
Jan 23 18:52:40.517840 kubelet[3990]: I0123 18:52:40.517838 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-747kv\" (UniqueName: \"kubernetes.io/projected/5bbc2f8a-42af-47f8-abda-8cfa518ad803-kube-api-access-747kv\") pod \"tigera-operator-7dcd859c48-2cqvg\" (UID: \"5bbc2f8a-42af-47f8-abda-8cfa518ad803\") " pod="tigera-operator/tigera-operator-7dcd859c48-2cqvg"
Jan 23 18:52:40.531375 containerd[2555]: time="2026-01-23T18:52:40.531340253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7p2f7,Uid:8e57897b-29c6-47f6-82b4-b5b9e420aa14,Namespace:kube-system,Attempt:0,}"
Jan 23 18:52:40.562826 containerd[2555]: time="2026-01-23T18:52:40.562795191Z" level=info msg="connecting to shim 012daaf6b55c60abb1b60b8ac65fc88cc1d3bb06ac43a4539a967b55e9d00119" address="unix:///run/containerd/s/b0af5de0d3bc682c2ad0e2ad041b8575944aec71ca21d5595029bec66a8f0f1f" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:52:40.586643 systemd[1]: Started cri-containerd-012daaf6b55c60abb1b60b8ac65fc88cc1d3bb06ac43a4539a967b55e9d00119.scope - libcontainer container 012daaf6b55c60abb1b60b8ac65fc88cc1d3bb06ac43a4539a967b55e9d00119.
Jan 23 18:52:40.597909 kernel: kauditd_printk_skb: 32 callbacks suppressed
Jan 23 18:52:40.597981 kernel: audit: type=1334 audit(1769194360.594:463): prog-id=157 op=LOAD
Jan 23 18:52:40.594000 audit: BPF prog-id=157 op=LOAD
Jan 23 18:52:40.599495 kernel: audit: type=1334 audit(1769194360.596:464): prog-id=158 op=LOAD
Jan 23 18:52:40.599553 kernel: audit: type=1300 audit(1769194360.596:464): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:40.596000 audit: BPF prog-id=158 op=LOAD
Jan 23 18:52:40.596000 audit[4056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663
Jan 23 18:52:40.609283 kernel: audit: type=1327 audit(1769194360.596:464): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663
Jan 23 18:52:40.596000 audit: BPF prog-id=158 op=UNLOAD
Jan 23 18:52:40.612498 kernel: audit: type=1334 audit(1769194360.596:465): prog-id=158 op=UNLOAD
Jan 23 18:52:40.596000 audit[4056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:40.617491 kernel: audit: type=1300 audit(1769194360.596:465): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663
Jan 23 18:52:40.623581 kernel: audit: type=1327 audit(1769194360.596:465): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663
Jan 23 18:52:40.623641 kernel: audit: type=1334 audit(1769194360.596:466): prog-id=159 op=LOAD
Jan 23 18:52:40.596000 audit: BPF prog-id=159 op=LOAD
Jan 23 18:52:40.627510 kernel: audit: type=1300 audit(1769194360.596:466): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:40.596000 audit[4056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663
Jan 23 18:52:40.631135 kernel: audit: type=1327 audit(1769194360.596:466): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663
Jan 23 18:52:40.596000 audit: BPF prog-id=160 op=LOAD
Jan 23 18:52:40.596000 audit[4056]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663
Jan 23 18:52:40.596000 audit: BPF prog-id=160 op=UNLOAD
Jan 23 18:52:40.596000 audit[4056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663
Jan 23 18:52:40.596000 audit: BPF prog-id=159 op=UNLOAD Jan 23 18:52:40.596000 audit[4056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663 Jan 23 18:52:40.596000 audit: BPF prog-id=161 op=LOAD Jan 23 18:52:40.596000 audit[4056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031326461616636623535633630616262316236306238616336356663 Jan 23 18:52:40.639808 containerd[2555]: time="2026-01-23T18:52:40.639777277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7p2f7,Uid:8e57897b-29c6-47f6-82b4-b5b9e420aa14,Namespace:kube-system,Attempt:0,} returns sandbox id \"012daaf6b55c60abb1b60b8ac65fc88cc1d3bb06ac43a4539a967b55e9d00119\"" Jan 23 18:52:40.644470 containerd[2555]: time="2026-01-23T18:52:40.644444296Z" level=info msg="CreateContainer within sandbox \"012daaf6b55c60abb1b60b8ac65fc88cc1d3bb06ac43a4539a967b55e9d00119\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:52:40.661490 containerd[2555]: 
time="2026-01-23T18:52:40.661445134Z" level=info msg="Container 70c8bbe2e5a3d3e55757db9c034ca8e0757305de2325c697d7f144a6af8a6843: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:40.674166 containerd[2555]: time="2026-01-23T18:52:40.674139876Z" level=info msg="CreateContainer within sandbox \"012daaf6b55c60abb1b60b8ac65fc88cc1d3bb06ac43a4539a967b55e9d00119\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70c8bbe2e5a3d3e55757db9c034ca8e0757305de2325c697d7f144a6af8a6843\"" Jan 23 18:52:40.674754 containerd[2555]: time="2026-01-23T18:52:40.674734568Z" level=info msg="StartContainer for \"70c8bbe2e5a3d3e55757db9c034ca8e0757305de2325c697d7f144a6af8a6843\"" Jan 23 18:52:40.676250 containerd[2555]: time="2026-01-23T18:52:40.676198579Z" level=info msg="connecting to shim 70c8bbe2e5a3d3e55757db9c034ca8e0757305de2325c697d7f144a6af8a6843" address="unix:///run/containerd/s/b0af5de0d3bc682c2ad0e2ad041b8575944aec71ca21d5595029bec66a8f0f1f" protocol=ttrpc version=3 Jan 23 18:52:40.691653 systemd[1]: Started cri-containerd-70c8bbe2e5a3d3e55757db9c034ca8e0757305de2325c697d7f144a6af8a6843.scope - libcontainer container 70c8bbe2e5a3d3e55757db9c034ca8e0757305de2325c697d7f144a6af8a6843. 
Jan 23 18:52:40.723000 audit: BPF prog-id=162 op=LOAD Jan 23 18:52:40.723000 audit[4083]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4044 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730633862626532653561336433653535373537646239633033346361 Jan 23 18:52:40.723000 audit: BPF prog-id=163 op=LOAD Jan 23 18:52:40.723000 audit[4083]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4044 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730633862626532653561336433653535373537646239633033346361 Jan 23 18:52:40.723000 audit: BPF prog-id=163 op=UNLOAD Jan 23 18:52:40.723000 audit[4083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.723000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730633862626532653561336433653535373537646239633033346361 Jan 23 18:52:40.723000 audit: BPF prog-id=162 op=UNLOAD Jan 23 18:52:40.723000 audit[4083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730633862626532653561336433653535373537646239633033346361 Jan 23 18:52:40.723000 audit: BPF prog-id=164 op=LOAD Jan 23 18:52:40.723000 audit[4083]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4044 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730633862626532653561336433653535373537646239633033346361 Jan 23 18:52:40.741543 containerd[2555]: time="2026-01-23T18:52:40.741514918Z" level=info msg="StartContainer for \"70c8bbe2e5a3d3e55757db9c034ca8e0757305de2325c697d7f144a6af8a6843\" returns successfully" Jan 23 18:52:40.797950 containerd[2555]: time="2026-01-23T18:52:40.797927185Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2cqvg,Uid:5bbc2f8a-42af-47f8-abda-8cfa518ad803,Namespace:tigera-operator,Attempt:0,}" Jan 23 18:52:40.827000 audit[4147]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=4147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:40.827000 audit[4147]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc52d436b0 a2=0 a3=7ffc52d4369c items=0 ppid=4095 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.827000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:52:40.830000 audit[4148]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=4148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:40.830000 audit[4148]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc682e54b0 a2=0 a3=7ffc682e549c items=0 ppid=4095 pid=4148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:52:40.834000 audit[4153]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=4153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:40.834000 audit[4153]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd619f45b0 a2=0 a3=7ffd619f459c items=0 ppid=4095 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:52:40.836000 audit[4157]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.836000 audit[4157]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1d8a1360 a2=0 a3=7ffe1d8a134c items=0 ppid=4095 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:52:40.840341 containerd[2555]: time="2026-01-23T18:52:40.839472461Z" level=info msg="connecting to shim 04785f157c329bfda537bb3f80871a835e6c22f9e4e2910e7b3c0c3dd882586b" address="unix:///run/containerd/s/3d5e2ab85219d44eee7fd38d03a470beab0caeae23402842cdbba0297a72a3d6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:40.840000 audit[4159]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=4159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.840000 audit[4159]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4d2cd870 a2=0 a3=7ffc4d2cd85c items=0 ppid=4095 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.840000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:52:40.842000 audit[4169]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=4169 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.842000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8e8c5bf0 a2=0 a3=7fff8e8c5bdc items=0 ppid=4095 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:52:40.860638 systemd[1]: Started cri-containerd-04785f157c329bfda537bb3f80871a835e6c22f9e4e2910e7b3c0c3dd882586b.scope - libcontainer container 04785f157c329bfda537bb3f80871a835e6c22f9e4e2910e7b3c0c3dd882586b. Jan 23 18:52:40.867000 audit: BPF prog-id=165 op=LOAD Jan 23 18:52:40.867000 audit: BPF prog-id=166 op=LOAD Jan 23 18:52:40.867000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4156 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373835663135376333323962666461353337626233663830383731 Jan 23 18:52:40.867000 audit: BPF prog-id=166 op=UNLOAD Jan 23 18:52:40.867000 audit[4168]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4156 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.867000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373835663135376333323962666461353337626233663830383731 Jan 23 18:52:40.867000 audit: BPF prog-id=167 op=LOAD Jan 23 18:52:40.867000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4156 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373835663135376333323962666461353337626233663830383731 Jan 23 18:52:40.867000 audit: BPF prog-id=168 op=LOAD Jan 23 18:52:40.867000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4156 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373835663135376333323962666461353337626233663830383731 Jan 23 18:52:40.867000 audit: BPF prog-id=168 op=UNLOAD Jan 23 18:52:40.867000 audit[4168]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4156 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 18:52:40.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373835663135376333323962666461353337626233663830383731 Jan 23 18:52:40.867000 audit: BPF prog-id=167 op=UNLOAD Jan 23 18:52:40.867000 audit[4168]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4156 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373835663135376333323962666461353337626233663830383731 Jan 23 18:52:40.867000 audit: BPF prog-id=169 op=LOAD Jan 23 18:52:40.867000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4156 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373835663135376333323962666461353337626233663830383731 Jan 23 18:52:40.896745 containerd[2555]: time="2026-01-23T18:52:40.896678525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2cqvg,Uid:5bbc2f8a-42af-47f8-abda-8cfa518ad803,Namespace:tigera-operator,Attempt:0,} returns sandbox id 
\"04785f157c329bfda537bb3f80871a835e6c22f9e4e2910e7b3c0c3dd882586b\"" Jan 23 18:52:40.898641 containerd[2555]: time="2026-01-23T18:52:40.898623337Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 18:52:40.932000 audit[4196]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.932000 audit[4196]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc5f3564b0 a2=0 a3=7ffc5f35649c items=0 ppid=4095 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:52:40.936000 audit[4198]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.936000 audit[4198]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc27e932d0 a2=0 a3=7ffc27e932bc items=0 ppid=4095 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 23 18:52:40.939000 audit[4201]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.939000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7ffc8cc13a90 a2=0 a3=7ffc8cc13a7c items=0 ppid=4095 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 23 18:52:40.942000 audit[4202]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.942000 audit[4202]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcec867dc0 a2=0 a3=7ffcec867dac items=0 ppid=4095 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.942000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:52:40.945000 audit[4204]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.945000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde7538f10 a2=0 a3=7ffde7538efc items=0 ppid=4095 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.945000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:52:40.946000 audit[4205]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.946000 audit[4205]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4d96ec90 a2=0 a3=7ffe4d96ec7c items=0 ppid=4095 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:52:40.949000 audit[4207]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.949000 audit[4207]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcbc777500 a2=0 a3=7ffcbc7774ec items=0 ppid=4095 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.949000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 18:52:40.952000 audit[4210]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.952000 audit[4210]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffdbea8dbf0 a2=0 a3=7ffdbea8dbdc items=0 ppid=4095 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.952000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 23 18:52:40.953000 audit[4211]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.953000 audit[4211]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4c9afa40 a2=0 a3=7fff4c9afa2c items=0 ppid=4095 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:52:40.955000 audit[4213]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.955000 audit[4213]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda555e260 a2=0 a3=7ffda555e24c items=0 ppid=4095 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.955000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:52:40.956000 audit[4214]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.956000 audit[4214]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff52563480 a2=0 a3=7fff5256346c items=0 ppid=4095 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:52:40.958000 audit[4216]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.958000 audit[4216]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1c2106a0 a2=0 a3=7fff1c21068c items=0 ppid=4095 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.958000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 18:52:40.961000 audit[4219]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.961000 audit[4219]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffc9bb03de0 a2=0 a3=7ffc9bb03dcc items=0 ppid=4095 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.961000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 18:52:40.964000 audit[4222]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.964000 audit[4222]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3f62b490 a2=0 a3=7ffc3f62b47c items=0 ppid=4095 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.964000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 18:52:40.965000 audit[4223]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.965000 audit[4223]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdf68583b0 a2=0 a3=7ffdf685839c items=0 ppid=4095 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.965000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:52:40.967000 audit[4225]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.967000 audit[4225]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffd8e3f230 a2=0 a3=7fffd8e3f21c items=0 ppid=4095 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:52:40.970000 audit[4228]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.970000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc584a7510 a2=0 a3=7ffc584a74fc items=0 ppid=4095 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.970000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:52:40.971000 audit[4229]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.971000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4fdbe730 a2=0 a3=7ffc4fdbe71c items=0 ppid=4095 pid=4229 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.971000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:52:40.973000 audit[4231]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:52:40.973000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff99496330 a2=0 a3=7fff9949631c items=0 ppid=4095 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:40.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:52:41.117000 audit[4237]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:41.117000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc94b9bd00 a2=0 a3=7ffc94b9bcec items=0 ppid=4095 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.117000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:41.127000 audit[4237]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4237 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 23 18:52:41.127000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc94b9bd00 a2=0 a3=7ffc94b9bcec items=0 ppid=4095 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.127000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:41.128000 audit[4242]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.128000 audit[4242]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffce0cac900 a2=0 a3=7ffce0cac8ec items=0 ppid=4095 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.128000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:52:41.131000 audit[4244]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.131000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff3a5fb6c0 a2=0 a3=7fff3a5fb6ac items=0 ppid=4095 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.131000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 23 18:52:41.136000 audit[4247]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.136000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffdb204890 a2=0 a3=7fffdb20487c items=0 ppid=4095 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.136000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 23 18:52:41.137000 audit[4248]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.137000 audit[4248]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd25dc8680 a2=0 a3=7ffd25dc866c items=0 ppid=4095 pid=4248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:52:41.139000 audit[4250]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.139000 audit[4250]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffd9101ae00 a2=0 a3=7ffd9101adec items=0 ppid=4095 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.139000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:52:41.140000 audit[4251]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.140000 audit[4251]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4575dd90 a2=0 a3=7fff4575dd7c items=0 ppid=4095 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:52:41.142000 audit[4253]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.142000 audit[4253]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd57ea9580 a2=0 a3=7ffd57ea956c items=0 ppid=4095 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.142000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 23 18:52:41.145000 audit[4256]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.145000 audit[4256]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc1bfe2a50 a2=0 a3=7ffc1bfe2a3c items=0 ppid=4095 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.145000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 18:52:41.146000 audit[4257]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.146000 audit[4257]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbc4b8490 a2=0 a3=7ffdbc4b847c items=0 ppid=4095 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:52:41.149000 audit[4259]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.149000 audit[4259]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7fffa7e125e0 a2=0 a3=7fffa7e125cc items=0 ppid=4095 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:52:41.150000 audit[4260]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.150000 audit[4260]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd635688d0 a2=0 a3=7ffd635688bc items=0 ppid=4095 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:52:41.152000 audit[4262]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.152000 audit[4262]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcbfa641d0 a2=0 a3=7ffcbfa641bc items=0 ppid=4095 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.152000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 18:52:41.155000 audit[4265]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.155000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc06b5a70 a2=0 a3=7ffdc06b5a5c items=0 ppid=4095 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 18:52:41.158000 audit[4268]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.158000 audit[4268]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1bdc7a10 a2=0 a3=7ffe1bdc79fc items=0 ppid=4095 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 23 18:52:41.160000 audit[4269]: NETFILTER_CFG table=nat:98 family=10 
entries=1 op=nft_register_chain pid=4269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.160000 audit[4269]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe651fbb30 a2=0 a3=7ffe651fbb1c items=0 ppid=4095 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:52:41.162000 audit[4271]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.162000 audit[4271]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc22d8c7c0 a2=0 a3=7ffc22d8c7ac items=0 ppid=4095 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.162000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:52:41.165000 audit[4274]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.165000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef5ccda20 a2=0 a3=7ffef5ccda0c items=0 ppid=4095 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.165000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:52:41.166000 audit[4275]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.166000 audit[4275]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9e682670 a2=0 a3=7fff9e68265c items=0 ppid=4095 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.166000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:52:41.168000 audit[4277]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.168000 audit[4277]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff4b450b30 a2=0 a3=7fff4b450b1c items=0 ppid=4095 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.168000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:52:41.169000 audit[4278]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.169000 audit[4278]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc25a52cb0 a2=0 
a3=7ffc25a52c9c items=0 ppid=4095 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:52:41.171000 audit[4280]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.171000 audit[4280]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff340ace40 a2=0 a3=7fff340ace2c items=0 ppid=4095 pid=4280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:52:41.174000 audit[4283]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:52:41.174000 audit[4283]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe60cd8d40 a2=0 a3=7ffe60cd8d2c items=0 ppid=4095 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:52:41.177000 audit[4285]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:52:41.177000 audit[4285]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffea71a100 a2=0 a3=7fffea71a0ec items=0 ppid=4095 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.177000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:41.177000 audit[4285]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:52:41.177000 audit[4285]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffea71a100 a2=0 a3=7fffea71a0ec items=0 ppid=4095 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:41.177000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:42.070850 kubelet[3990]: I0123 18:52:42.070785 3990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7p2f7" podStartSLOduration=2.070770412 podStartE2EDuration="2.070770412s" podCreationTimestamp="2026-01-23 18:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:52:40.943404751 +0000 UTC m=+7.147040153" watchObservedRunningTime="2026-01-23 18:52:42.070770412 +0000 UTC m=+8.274405805" Jan 23 18:52:42.272698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801045968.mount: Deactivated successfully. 
Jan 23 18:52:42.780623 containerd[2555]: time="2026-01-23T18:52:42.780580236Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:42.782850 containerd[2555]: time="2026-01-23T18:52:42.782751388Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 23 18:52:42.785335 containerd[2555]: time="2026-01-23T18:52:42.785304131Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:42.789143 containerd[2555]: time="2026-01-23T18:52:42.789096940Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:42.789572 containerd[2555]: time="2026-01-23T18:52:42.789461195Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.890810988s" Jan 23 18:52:42.789572 containerd[2555]: time="2026-01-23T18:52:42.789502260Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 18:52:42.791602 containerd[2555]: time="2026-01-23T18:52:42.791573589Z" level=info msg="CreateContainer within sandbox \"04785f157c329bfda537bb3f80871a835e6c22f9e4e2910e7b3c0c3dd882586b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 18:52:42.810122 containerd[2555]: time="2026-01-23T18:52:42.809587929Z" level=info msg="Container 
be30272016e74d9b4784d6f6226327e32fa26889ecb6a1d3fe954f295e39539c: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:42.821410 containerd[2555]: time="2026-01-23T18:52:42.821383828Z" level=info msg="CreateContainer within sandbox \"04785f157c329bfda537bb3f80871a835e6c22f9e4e2910e7b3c0c3dd882586b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"be30272016e74d9b4784d6f6226327e32fa26889ecb6a1d3fe954f295e39539c\"" Jan 23 18:52:42.821904 containerd[2555]: time="2026-01-23T18:52:42.821877330Z" level=info msg="StartContainer for \"be30272016e74d9b4784d6f6226327e32fa26889ecb6a1d3fe954f295e39539c\"" Jan 23 18:52:42.822706 containerd[2555]: time="2026-01-23T18:52:42.822679981Z" level=info msg="connecting to shim be30272016e74d9b4784d6f6226327e32fa26889ecb6a1d3fe954f295e39539c" address="unix:///run/containerd/s/3d5e2ab85219d44eee7fd38d03a470beab0caeae23402842cdbba0297a72a3d6" protocol=ttrpc version=3 Jan 23 18:52:42.841657 systemd[1]: Started cri-containerd-be30272016e74d9b4784d6f6226327e32fa26889ecb6a1d3fe954f295e39539c.scope - libcontainer container be30272016e74d9b4784d6f6226327e32fa26889ecb6a1d3fe954f295e39539c. 
Jan 23 18:52:42.849000 audit: BPF prog-id=170 op=LOAD Jan 23 18:52:42.849000 audit: BPF prog-id=171 op=LOAD Jan 23 18:52:42.849000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4156 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:42.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333032373230313665373464396234373834643666363232363332 Jan 23 18:52:42.849000 audit: BPF prog-id=171 op=UNLOAD Jan 23 18:52:42.849000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4156 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:42.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333032373230313665373464396234373834643666363232363332 Jan 23 18:52:42.850000 audit: BPF prog-id=172 op=LOAD Jan 23 18:52:42.850000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4156 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:42.850000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333032373230313665373464396234373834643666363232363332 Jan 23 18:52:42.850000 audit: BPF prog-id=173 op=LOAD Jan 23 18:52:42.850000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4156 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:42.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333032373230313665373464396234373834643666363232363332 Jan 23 18:52:42.850000 audit: BPF prog-id=173 op=UNLOAD Jan 23 18:52:42.850000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4156 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:42.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333032373230313665373464396234373834643666363232363332 Jan 23 18:52:42.850000 audit: BPF prog-id=172 op=UNLOAD Jan 23 18:52:42.850000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4156 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:52:42.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333032373230313665373464396234373834643666363232363332 Jan 23 18:52:42.850000 audit: BPF prog-id=174 op=LOAD Jan 23 18:52:42.850000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4156 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:42.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333032373230313665373464396234373834643666363232363332 Jan 23 18:52:42.866965 containerd[2555]: time="2026-01-23T18:52:42.866938947Z" level=info msg="StartContainer for \"be30272016e74d9b4784d6f6226327e32fa26889ecb6a1d3fe954f295e39539c\" returns successfully" Jan 23 18:52:42.945439 kubelet[3990]: I0123 18:52:42.944866 3990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2cqvg" podStartSLOduration=1.052248291 podStartE2EDuration="2.944848868s" podCreationTimestamp="2026-01-23 18:52:40 +0000 UTC" firstStartedPulling="2026-01-23 18:52:40.897542051 +0000 UTC m=+7.101177449" lastFinishedPulling="2026-01-23 18:52:42.790142619 +0000 UTC m=+8.993778026" observedRunningTime="2026-01-23 18:52:42.944848108 +0000 UTC m=+9.148483510" watchObservedRunningTime="2026-01-23 18:52:42.944848868 +0000 UTC m=+9.148484270" Jan 23 18:52:48.482144 sudo[3012]: pam_unix(sudo:session): session closed for user root Jan 23 18:52:48.481000 audit[3012]: USER_END pid=3012 uid=500 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:48.487918 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 23 18:52:48.487996 kernel: audit: type=1106 audit(1769194368.481:543): pid=3012 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:48.481000 audit[3012]: CRED_DISP pid=3012 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:52:48.493505 kernel: audit: type=1104 audit(1769194368.481:544): pid=3012 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 18:52:48.594584 sshd[3011]: Connection closed by 10.200.16.10 port 60958 Jan 23 18:52:48.595231 sshd-session[3007]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:48.603538 kernel: audit: type=1106 audit(1769194368.595:545): pid=3007 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:48.595000 audit[3007]: USER_END pid=3007 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:48.606227 systemd[1]: sshd@6-10.200.8.14:22-10.200.16.10:60958.service: Deactivated successfully. Jan 23 18:52:48.612091 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:52:48.612550 systemd[1]: session-10.scope: Consumed 3.294s CPU time, 230.8M memory peak. Jan 23 18:52:48.602000 audit[3007]: CRED_DISP pid=3007 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:48.620416 kernel: audit: type=1104 audit(1769194368.602:546): pid=3007 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:52:48.622196 systemd-logind[2501]: Session 10 logged out. Waiting for processes to exit. 
Jan 23 18:52:48.623924 systemd-logind[2501]: Removed session 10. Jan 23 18:52:48.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.14:22-10.200.16.10:60958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:48.631493 kernel: audit: type=1131 audit(1769194368.605:547): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.14:22-10.200.16.10:60958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:52:49.546000 audit[4378]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:49.551519 kernel: audit: type=1325 audit(1769194369.546:548): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:49.546000 audit[4378]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff5eb23200 a2=0 a3=7fff5eb231ec items=0 ppid=4095 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:49.559554 kernel: audit: type=1300 audit(1769194369.546:548): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff5eb23200 a2=0 a3=7fff5eb231ec items=0 ppid=4095 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:49.546000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:49.564491 kernel: audit: type=1327 audit(1769194369.546:548): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:49.559000 audit[4378]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:49.569495 kernel: audit: type=1325 audit(1769194369.559:549): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:49.559000 audit[4378]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5eb23200 a2=0 a3=0 items=0 ppid=4095 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:49.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:49.576518 kernel: audit: type=1300 audit(1769194369.559:549): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5eb23200 a2=0 a3=0 items=0 ppid=4095 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:49.582000 audit[4380]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:49.582000 audit[4380]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff41b52b30 a2=0 a3=7fff41b52b1c items=0 ppid=4095 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:49.582000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:49.585000 audit[4380]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:49.585000 audit[4380]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff41b52b30 a2=0 a3=0 items=0 ppid=4095 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:49.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:49.784499 waagent[2726]: 2026-01-23T18:52:49.784396Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 18:52:49.796309 waagent[2726]: 2026-01-23T18:52:49.795088Z INFO ExtHandler Jan 23 18:52:49.796309 waagent[2726]: 2026-01-23T18:52:49.795174Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b09e22ec-841b-4108-a5dc-5ca7cc509b22 eTag: 10298975568112998345 source: Fabric] Jan 23 18:52:49.796500 waagent[2726]: 2026-01-23T18:52:49.795468Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 18:52:49.797226 waagent[2726]: 2026-01-23T18:52:49.797196Z INFO ExtHandler Jan 23 18:52:49.797333 waagent[2726]: 2026-01-23T18:52:49.797316Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 18:52:49.848153 waagent[2726]: 2026-01-23T18:52:49.847201Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 18:52:49.952603 waagent[2726]: 2026-01-23T18:52:49.952550Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DB4942BE721893C2DEDD11D7F902A5E0B5D8FB39', 'hasPrivateKey': True} Jan 23 18:52:49.953015 waagent[2726]: 2026-01-23T18:52:49.952982Z INFO ExtHandler Fetch goal state completed Jan 23 18:52:49.953358 waagent[2726]: 2026-01-23T18:52:49.953329Z INFO ExtHandler ExtHandler Jan 23 18:52:49.953518 waagent[2726]: 2026-01-23T18:52:49.953396Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: dfa20db1-ae63-4656-8cd7-91a08610d930 correlation 53a62e73-5972-4485-a199-608bcd8f9bfd created: 2026-01-23T18:52:42.287323Z] Jan 23 18:52:49.954628 waagent[2726]: 2026-01-23T18:52:49.954507Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 23 18:52:49.955979 waagent[2726]: 2026-01-23T18:52:49.955946Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 2 ms] Jan 23 18:52:51.322000 audit[4387]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:51.322000 audit[4387]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd11fc44d0 a2=0 a3=7ffd11fc44bc items=0 ppid=4095 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:51.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:51.331000 audit[4387]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:51.331000 audit[4387]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd11fc44d0 a2=0 a3=0 items=0 ppid=4095 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:51.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:51.520000 audit[4389]: NETFILTER_CFG table=filter:114 family=2 entries=19 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:51.520000 audit[4389]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5b942460 a2=0 a3=7fff5b94244c items=0 ppid=4095 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:51.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:51.524000 audit[4389]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:51.524000 audit[4389]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5b942460 a2=0 a3=0 items=0 ppid=4095 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:51.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:52.534000 audit[4391]: NETFILTER_CFG table=filter:116 family=2 entries=20 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:52.534000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdcac82a10 a2=0 a3=7ffdcac829fc items=0 ppid=4095 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:52.534000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:52.539000 audit[4391]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:52.539000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdcac82a10 a2=0 a3=0 items=0 ppid=4095 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:52.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:52.993377 systemd[1]: Created slice kubepods-besteffort-poda101a6fd_c1d5_4e57_8dbe_6a4da46cad49.slice - libcontainer container kubepods-besteffort-poda101a6fd_c1d5_4e57_8dbe_6a4da46cad49.slice. Jan 23 18:52:53.000773 kubelet[3990]: I0123 18:52:53.000742 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4vf\" (UniqueName: \"kubernetes.io/projected/a101a6fd-c1d5-4e57-8dbe-6a4da46cad49-kube-api-access-6x4vf\") pod \"calico-typha-6494dfcd8d-7vhbz\" (UID: \"a101a6fd-c1d5-4e57-8dbe-6a4da46cad49\") " pod="calico-system/calico-typha-6494dfcd8d-7vhbz" Jan 23 18:52:53.001071 kubelet[3990]: I0123 18:52:53.000786 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a101a6fd-c1d5-4e57-8dbe-6a4da46cad49-tigera-ca-bundle\") pod \"calico-typha-6494dfcd8d-7vhbz\" (UID: \"a101a6fd-c1d5-4e57-8dbe-6a4da46cad49\") " pod="calico-system/calico-typha-6494dfcd8d-7vhbz" Jan 23 18:52:53.001071 kubelet[3990]: I0123 18:52:53.000805 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a101a6fd-c1d5-4e57-8dbe-6a4da46cad49-typha-certs\") pod \"calico-typha-6494dfcd8d-7vhbz\" (UID: \"a101a6fd-c1d5-4e57-8dbe-6a4da46cad49\") " pod="calico-system/calico-typha-6494dfcd8d-7vhbz" Jan 23 18:52:53.169117 systemd[1]: Created slice kubepods-besteffort-pod5361c453_2c10_4c8f_a94c_fffa451baddb.slice - libcontainer container kubepods-besteffort-pod5361c453_2c10_4c8f_a94c_fffa451baddb.slice. 
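The audit PROCTITLE records above carry the audited command line as a single hex string in which argv elements are separated by NUL bytes. A minimal sketch to decode them (the sample value is copied from the records above):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    raw = bytes.fromhex(hex_str)
    # Each NUL-delimited chunk is one argv element; join with spaces for display.
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00"))

# PROCTITLE value taken verbatim from the audit records above:
title = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
         "002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(title))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

This confirms that every NETFILTER_CFG/SYSCALL/PROCTITLE triple in this stretch of the log is the same periodic `iptables-restore` invocation (here driven by kube-proxy under pid 4095) re-registering its rules.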
Jan 23 18:52:53.202214 kubelet[3990]: I0123 18:52:53.202186 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5361c453-2c10-4c8f-a94c-fffa451baddb-node-certs\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202424 kubelet[3990]: I0123 18:52:53.202222 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5361c453-2c10-4c8f-a94c-fffa451baddb-var-lib-calico\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202424 kubelet[3990]: I0123 18:52:53.202240 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5361c453-2c10-4c8f-a94c-fffa451baddb-cni-log-dir\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202424 kubelet[3990]: I0123 18:52:53.202255 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5361c453-2c10-4c8f-a94c-fffa451baddb-cni-net-dir\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202424 kubelet[3990]: I0123 18:52:53.202271 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5361c453-2c10-4c8f-a94c-fffa451baddb-lib-modules\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202424 kubelet[3990]: I0123 18:52:53.202289 3990 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5361c453-2c10-4c8f-a94c-fffa451baddb-xtables-lock\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202569 kubelet[3990]: I0123 18:52:53.202315 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdbh\" (UniqueName: \"kubernetes.io/projected/5361c453-2c10-4c8f-a94c-fffa451baddb-kube-api-access-xtdbh\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202569 kubelet[3990]: I0123 18:52:53.202337 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5361c453-2c10-4c8f-a94c-fffa451baddb-flexvol-driver-host\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202569 kubelet[3990]: I0123 18:52:53.202351 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5361c453-2c10-4c8f-a94c-fffa451baddb-var-run-calico\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202569 kubelet[3990]: I0123 18:52:53.202373 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5361c453-2c10-4c8f-a94c-fffa451baddb-tigera-ca-bundle\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202569 kubelet[3990]: I0123 18:52:53.202389 3990 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5361c453-2c10-4c8f-a94c-fffa451baddb-cni-bin-dir\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.202646 kubelet[3990]: I0123 18:52:53.202409 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5361c453-2c10-4c8f-a94c-fffa451baddb-policysync\") pod \"calico-node-jrz5r\" (UID: \"5361c453-2c10-4c8f-a94c-fffa451baddb\") " pod="calico-system/calico-node-jrz5r" Jan 23 18:52:53.299414 containerd[2555]: time="2026-01-23T18:52:53.299334342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6494dfcd8d-7vhbz,Uid:a101a6fd-c1d5-4e57-8dbe-6a4da46cad49,Namespace:calico-system,Attempt:0,}" Jan 23 18:52:53.304203 kubelet[3990]: E0123 18:52:53.303832 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.304203 kubelet[3990]: W0123 18:52:53.303853 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.304203 kubelet[3990]: E0123 18:52:53.303885 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.304203 kubelet[3990]: E0123 18:52:53.304026 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.304203 kubelet[3990]: W0123 18:52:53.304049 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.304203 kubelet[3990]: E0123 18:52:53.304061 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.305013 kubelet[3990]: E0123 18:52:53.304636 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.305013 kubelet[3990]: W0123 18:52:53.304684 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.305013 kubelet[3990]: E0123 18:52:53.304700 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.305013 kubelet[3990]: E0123 18:52:53.304991 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.305013 kubelet[3990]: W0123 18:52:53.305001 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.305013 kubelet[3990]: E0123 18:52:53.305013 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.305823 kubelet[3990]: E0123 18:52:53.305286 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.305823 kubelet[3990]: W0123 18:52:53.305295 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.305823 kubelet[3990]: E0123 18:52:53.305310 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.305823 kubelet[3990]: E0123 18:52:53.305472 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.305823 kubelet[3990]: W0123 18:52:53.305501 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.305823 kubelet[3990]: E0123 18:52:53.305521 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.305823 kubelet[3990]: E0123 18:52:53.305671 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.305823 kubelet[3990]: W0123 18:52:53.305678 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.305823 kubelet[3990]: E0123 18:52:53.305713 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.305823 kubelet[3990]: E0123 18:52:53.305789 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.306186 kubelet[3990]: W0123 18:52:53.305795 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.306186 kubelet[3990]: E0123 18:52:53.305844 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.306186 kubelet[3990]: E0123 18:52:53.305934 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.306186 kubelet[3990]: W0123 18:52:53.305940 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.306186 kubelet[3990]: E0123 18:52:53.306031 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.306186 kubelet[3990]: W0123 18:52:53.306036 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.306186 kubelet[3990]: E0123 18:52:53.306044 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.306186 kubelet[3990]: E0123 18:52:53.306183 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.306186 kubelet[3990]: W0123 18:52:53.306188 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.306395 kubelet[3990]: E0123 18:52:53.306194 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.307329 kubelet[3990]: E0123 18:52:53.307019 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.307329 kubelet[3990]: W0123 18:52:53.307034 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.307329 kubelet[3990]: E0123 18:52:53.307048 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.307329 kubelet[3990]: E0123 18:52:53.307181 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.308029 kubelet[3990]: E0123 18:52:53.307764 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.308029 kubelet[3990]: W0123 18:52:53.307776 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.308029 kubelet[3990]: E0123 18:52:53.307896 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.308758 kubelet[3990]: E0123 18:52:53.308647 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.308758 kubelet[3990]: W0123 18:52:53.308743 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.308993 kubelet[3990]: E0123 18:52:53.308940 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.310229 kubelet[3990]: E0123 18:52:53.310191 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.310229 kubelet[3990]: W0123 18:52:53.310207 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.313371 kubelet[3990]: E0123 18:52:53.310311 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.313921 kubelet[3990]: E0123 18:52:53.313882 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.314004 kubelet[3990]: W0123 18:52:53.313994 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.314397 kubelet[3990]: E0123 18:52:53.314382 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.314888 kubelet[3990]: E0123 18:52:53.314775 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.314888 kubelet[3990]: W0123 18:52:53.314784 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.314959 kubelet[3990]: E0123 18:52:53.314919 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.315362 kubelet[3990]: E0123 18:52:53.315301 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.315362 kubelet[3990]: W0123 18:52:53.315316 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.315362 kubelet[3990]: E0123 18:52:53.315327 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.320190 kubelet[3990]: E0123 18:52:53.320168 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.320190 kubelet[3990]: W0123 18:52:53.320183 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.320190 kubelet[3990]: E0123 18:52:53.320194 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.347052 containerd[2555]: time="2026-01-23T18:52:53.347022010Z" level=info msg="connecting to shim e7c388632548a4d69d2c17c985bc41f12829eb3bd961e2ff45f87137c513017b" address="unix:///run/containerd/s/8d0269818b393ebd6ebdafe295729eb48782ae290cfb66d8fa428efa81b5eee1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:53.371897 kubelet[3990]: E0123 18:52:53.369790 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:52:53.379498 systemd[1]: Started cri-containerd-e7c388632548a4d69d2c17c985bc41f12829eb3bd961e2ff45f87137c513017b.scope - libcontainer container e7c388632548a4d69d2c17c985bc41f12829eb3bd961e2ff45f87137c513017b. 
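The repeated FlexVolume failures above come from kubelet invoking a driver executable (`nodeagent~uds/uds`, which is absent on this node) with the `init` argument and parsing its stdout as JSON; an empty reply yields exactly the "unexpected end of JSON input" seen here. A hedged sketch of the kind of JSON status object the FlexVolume call convention expects (the function name is illustrative, not part of any real driver):

```python
import json

def flexvolume_reply(command: str) -> str:
    """Illustrative reply a FlexVolume driver prints to stdout for kubelet.

    kubelet parses this JSON; when the driver binary is missing, stdout is
    empty and kubelet logs "unexpected end of JSON input", as above.
    """
    if command == "init":
        # capabilities.attach=False tells kubelet not to issue attach/detach calls.
        return json.dumps({"status": "Success",
                           "capabilities": {"attach": False}})
    # Unhandled calls are reported as not supported rather than left empty.
    return json.dumps({"status": "Not supported"})

print(flexvolume_reply("init"))
```

Since the probe merely skips the broken plugin directory, these entries are noisy but non-fatal; the node-agent UDS driver is typically installed later by the Calico/Istio node components.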
Jan 23 18:52:53.389905 kubelet[3990]: E0123 18:52:53.389882 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.389905 kubelet[3990]: W0123 18:52:53.389904 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.390163 kubelet[3990]: E0123 18:52:53.389917 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.390163 kubelet[3990]: E0123 18:52:53.390029 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.390163 kubelet[3990]: W0123 18:52:53.390035 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.390163 kubelet[3990]: E0123 18:52:53.390042 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.390163 kubelet[3990]: E0123 18:52:53.390136 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.390163 kubelet[3990]: W0123 18:52:53.390141 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.390163 kubelet[3990]: E0123 18:52:53.390148 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.390332 kubelet[3990]: E0123 18:52:53.390274 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.390332 kubelet[3990]: W0123 18:52:53.390279 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.390332 kubelet[3990]: E0123 18:52:53.390286 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.390397 kubelet[3990]: E0123 18:52:53.390383 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.390397 kubelet[3990]: W0123 18:52:53.390387 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.390397 kubelet[3990]: E0123 18:52:53.390393 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.390903 kubelet[3990]: E0123 18:52:53.390786 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.390903 kubelet[3990]: W0123 18:52:53.390799 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.390903 kubelet[3990]: E0123 18:52:53.390810 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.390996 kubelet[3990]: E0123 18:52:53.390925 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.390996 kubelet[3990]: W0123 18:52:53.390930 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.390996 kubelet[3990]: E0123 18:52:53.390938 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.391299 kubelet[3990]: E0123 18:52:53.391124 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.391299 kubelet[3990]: W0123 18:52:53.391131 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.391299 kubelet[3990]: E0123 18:52:53.391139 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.391299 kubelet[3990]: E0123 18:52:53.391251 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.391299 kubelet[3990]: W0123 18:52:53.391256 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.391299 kubelet[3990]: E0123 18:52:53.391262 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.391537 kubelet[3990]: E0123 18:52:53.391526 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.391537 kubelet[3990]: W0123 18:52:53.391537 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.391595 kubelet[3990]: E0123 18:52:53.391547 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.391664 kubelet[3990]: E0123 18:52:53.391655 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.391689 kubelet[3990]: W0123 18:52:53.391664 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.391689 kubelet[3990]: E0123 18:52:53.391673 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.391801 kubelet[3990]: E0123 18:52:53.391793 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.391883 kubelet[3990]: W0123 18:52:53.391836 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.391883 kubelet[3990]: E0123 18:52:53.391846 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.392014 kubelet[3990]: E0123 18:52:53.392008 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.392052 kubelet[3990]: W0123 18:52:53.392042 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.392103 kubelet[3990]: E0123 18:52:53.392081 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.392223 kubelet[3990]: E0123 18:52:53.392217 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.392287 kubelet[3990]: W0123 18:52:53.392255 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.392287 kubelet[3990]: E0123 18:52:53.392264 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.392429 kubelet[3990]: E0123 18:52:53.392419 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.392455 kubelet[3990]: W0123 18:52:53.392429 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.392455 kubelet[3990]: E0123 18:52:53.392438 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.392881 kubelet[3990]: E0123 18:52:53.392575 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.392881 kubelet[3990]: W0123 18:52:53.392582 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.392881 kubelet[3990]: E0123 18:52:53.392590 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.392881 kubelet[3990]: E0123 18:52:53.392695 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.392881 kubelet[3990]: W0123 18:52:53.392699 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.392881 kubelet[3990]: E0123 18:52:53.392705 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.392881 kubelet[3990]: E0123 18:52:53.392780 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.392881 kubelet[3990]: W0123 18:52:53.392785 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.392881 kubelet[3990]: E0123 18:52:53.392792 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.392881 kubelet[3990]: E0123 18:52:53.392864 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.393107 kubelet[3990]: W0123 18:52:53.392869 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.393107 kubelet[3990]: E0123 18:52:53.392875 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.393107 kubelet[3990]: E0123 18:52:53.392965 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.393107 kubelet[3990]: W0123 18:52:53.392970 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.393107 kubelet[3990]: E0123 18:52:53.392976 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.399000 audit: BPF prog-id=175 op=LOAD
Jan 23 18:52:53.400000 audit: BPF prog-id=176 op=LOAD
Jan 23 18:52:53.400000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4422 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:53.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633338383633323534386134643639643263313763393835626334
Jan 23 18:52:53.400000 audit: BPF prog-id=176 op=UNLOAD
Jan 23 18:52:53.400000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4422 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:53.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633338383633323534386134643639643263313763393835626334
Jan 23 18:52:53.400000 audit: BPF prog-id=177 op=LOAD
Jan 23 18:52:53.400000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4422 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:53.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633338383633323534386134643639643263313763393835626334
Jan 23 18:52:53.400000 audit: BPF prog-id=178 op=LOAD
Jan 23 18:52:53.400000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4422 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:53.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633338383633323534386134643639643263313763393835626334
Jan 23 18:52:53.400000 audit: BPF prog-id=178 op=UNLOAD
Jan 23 18:52:53.400000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4422 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:53.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633338383633323534386134643639643263313763393835626334
Jan 23 18:52:53.400000 audit: BPF prog-id=177 op=UNLOAD
Jan 23 18:52:53.400000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4422 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:53.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633338383633323534386134643639643263313763393835626334
Jan 23 18:52:53.400000 audit: BPF prog-id=179 op=LOAD
Jan 23 18:52:53.400000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4422 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:52:53.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633338383633323534386134643639643263313763393835626334
Jan 23 18:52:53.403706 kubelet[3990]: E0123 18:52:53.403590 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.403706 kubelet[3990]: W0123 18:52:53.403603 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.403706 kubelet[3990]: E0123 18:52:53.403615 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.403706 kubelet[3990]: I0123 18:52:53.403638 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ad1b7350-c4c8-43d5-adb7-51075adcd4fd-varrun\") pod \"csi-node-driver-slbmv\" (UID: \"ad1b7350-c4c8-43d5-adb7-51075adcd4fd\") " pod="calico-system/csi-node-driver-slbmv"
Jan 23 18:52:53.403894 kubelet[3990]: E0123 18:52:53.403885 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.404004 kubelet[3990]: W0123 18:52:53.403939 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.404004 kubelet[3990]: E0123 18:52:53.403955 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.404004 kubelet[3990]: I0123 18:52:53.403970 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ad1b7350-c4c8-43d5-adb7-51075adcd4fd-socket-dir\") pod \"csi-node-driver-slbmv\" (UID: \"ad1b7350-c4c8-43d5-adb7-51075adcd4fd\") " pod="calico-system/csi-node-driver-slbmv"
Jan 23 18:52:53.404271 kubelet[3990]: E0123 18:52:53.404190 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.404271 kubelet[3990]: W0123 18:52:53.404201 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.404271 kubelet[3990]: E0123 18:52:53.404214 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.404401 kubelet[3990]: E0123 18:52:53.404395 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.404495 kubelet[3990]: W0123 18:52:53.404445 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.404495 kubelet[3990]: E0123 18:52:53.404458 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.404686 kubelet[3990]: E0123 18:52:53.404668 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.404686 kubelet[3990]: W0123 18:52:53.404676 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.404827 kubelet[3990]: E0123 18:52:53.404746 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.404827 kubelet[3990]: I0123 18:52:53.404766 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ad1b7350-c4c8-43d5-adb7-51075adcd4fd-registration-dir\") pod \"csi-node-driver-slbmv\" (UID: \"ad1b7350-c4c8-43d5-adb7-51075adcd4fd\") " pod="calico-system/csi-node-driver-slbmv"
Jan 23 18:52:53.404918 kubelet[3990]: E0123 18:52:53.404911 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.404976 kubelet[3990]: W0123 18:52:53.404970 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.405030 kubelet[3990]: E0123 18:52:53.405022 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.405160 kubelet[3990]: I0123 18:52:53.405140 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm4rs\" (UniqueName: \"kubernetes.io/projected/ad1b7350-c4c8-43d5-adb7-51075adcd4fd-kube-api-access-pm4rs\") pod \"csi-node-driver-slbmv\" (UID: \"ad1b7350-c4c8-43d5-adb7-51075adcd4fd\") " pod="calico-system/csi-node-driver-slbmv"
Jan 23 18:52:53.405246 kubelet[3990]: E0123 18:52:53.405230 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.405246 kubelet[3990]: W0123 18:52:53.405238 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.405333 kubelet[3990]: E0123 18:52:53.405309 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.405528 kubelet[3990]: E0123 18:52:53.405513 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.405528 kubelet[3990]: W0123 18:52:53.405520 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.405647 kubelet[3990]: E0123 18:52:53.405602 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.405778 kubelet[3990]: E0123 18:52:53.405772 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.405818 kubelet[3990]: W0123 18:52:53.405809 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.405860 kubelet[3990]: E0123 18:52:53.405853 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.406006 kubelet[3990]: E0123 18:52:53.406000 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.406093 kubelet[3990]: W0123 18:52:53.406050 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.406093 kubelet[3990]: E0123 18:52:53.406066 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.406259 kubelet[3990]: E0123 18:52:53.406243 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.406259 kubelet[3990]: W0123 18:52:53.406250 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.406375 kubelet[3990]: E0123 18:52:53.406321 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.406375 kubelet[3990]: I0123 18:52:53.406340 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad1b7350-c4c8-43d5-adb7-51075adcd4fd-kubelet-dir\") pod \"csi-node-driver-slbmv\" (UID: \"ad1b7350-c4c8-43d5-adb7-51075adcd4fd\") " pod="calico-system/csi-node-driver-slbmv"
Jan 23 18:52:53.406803 kubelet[3990]: E0123 18:52:53.406785 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.406908 kubelet[3990]: W0123 18:52:53.406854 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.406957 kubelet[3990]: E0123 18:52:53.406947 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.407099 kubelet[3990]: E0123 18:52:53.407092 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.407160 kubelet[3990]: W0123 18:52:53.407149 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.407234 kubelet[3990]: E0123 18:52:53.407196 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.407413 kubelet[3990]: E0123 18:52:53.407399 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.407448 kubelet[3990]: W0123 18:52:53.407413 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.407448 kubelet[3990]: E0123 18:52:53.407424 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.407823 kubelet[3990]: E0123 18:52:53.407586 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.407823 kubelet[3990]: W0123 18:52:53.407593 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.407823 kubelet[3990]: E0123 18:52:53.407601 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.433925 containerd[2555]: time="2026-01-23T18:52:53.433894204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6494dfcd8d-7vhbz,Uid:a101a6fd-c1d5-4e57-8dbe-6a4da46cad49,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7c388632548a4d69d2c17c985bc41f12829eb3bd961e2ff45f87137c513017b\""
Jan 23 18:52:53.435672 containerd[2555]: time="2026-01-23T18:52:53.435636326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 23 18:52:53.472467 containerd[2555]: time="2026-01-23T18:52:53.472437636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jrz5r,Uid:5361c453-2c10-4c8f-a94c-fffa451baddb,Namespace:calico-system,Attempt:0,}"
Jan 23 18:52:53.506962 kubelet[3990]: E0123 18:52:53.506943 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.506962 kubelet[3990]: W0123 18:52:53.506958 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.507072 kubelet[3990]: E0123 18:52:53.506971 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.507164 kubelet[3990]: E0123 18:52:53.507151 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.507164 kubelet[3990]: W0123 18:52:53.507160 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.507238 kubelet[3990]: E0123 18:52:53.507177 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.507347 kubelet[3990]: E0123 18:52:53.507335 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.507384 kubelet[3990]: W0123 18:52:53.507347 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.507384 kubelet[3990]: E0123 18:52:53.507366 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.507550 kubelet[3990]: E0123 18:52:53.507538 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.507550 kubelet[3990]: W0123 18:52:53.507547 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.507651 kubelet[3990]: E0123 18:52:53.507557 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.507707 kubelet[3990]: E0123 18:52:53.507695 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.507707 kubelet[3990]: W0123 18:52:53.507703 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.507763 kubelet[3990]: E0123 18:52:53.507713 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.507846 kubelet[3990]: E0123 18:52:53.507833 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.507846 kubelet[3990]: W0123 18:52:53.507842 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.507959 kubelet[3990]: E0123 18:52:53.507853 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.507959 kubelet[3990]: E0123 18:52:53.507947 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.507959 kubelet[3990]: W0123 18:52:53.507953 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.508029 kubelet[3990]: E0123 18:52:53.507959 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.508105 kubelet[3990]: E0123 18:52:53.508093 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.508105 kubelet[3990]: W0123 18:52:53.508102 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.508203 kubelet[3990]: E0123 18:52:53.508111 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:52:53.508233 kubelet[3990]: E0123 18:52:53.508228 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:52:53.508255 kubelet[3990]: W0123 18:52:53.508234 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:52:53.508255 kubelet[3990]: E0123 18:52:53.508249 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.508360 kubelet[3990]: E0123 18:52:53.508348 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.508360 kubelet[3990]: W0123 18:52:53.508356 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.508455 kubelet[3990]: E0123 18:52:53.508370 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.508504 kubelet[3990]: E0123 18:52:53.508463 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.508504 kubelet[3990]: W0123 18:52:53.508468 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.508504 kubelet[3990]: E0123 18:52:53.508495 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.508672 kubelet[3990]: E0123 18:52:53.508662 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.508672 kubelet[3990]: W0123 18:52:53.508671 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.508723 kubelet[3990]: E0123 18:52:53.508713 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.508817 kubelet[3990]: E0123 18:52:53.508807 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.508817 kubelet[3990]: W0123 18:52:53.508814 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.508865 kubelet[3990]: E0123 18:52:53.508847 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.508953 kubelet[3990]: E0123 18:52:53.508943 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.508953 kubelet[3990]: W0123 18:52:53.508950 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.509031 kubelet[3990]: E0123 18:52:53.509004 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.509074 kubelet[3990]: E0123 18:52:53.509064 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.509074 kubelet[3990]: W0123 18:52:53.509073 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.509139 kubelet[3990]: E0123 18:52:53.509130 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.509223 kubelet[3990]: E0123 18:52:53.509212 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.509223 kubelet[3990]: W0123 18:52:53.509220 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.509269 kubelet[3990]: E0123 18:52:53.509229 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.509367 kubelet[3990]: E0123 18:52:53.509357 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.509367 kubelet[3990]: W0123 18:52:53.509364 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.509415 kubelet[3990]: E0123 18:52:53.509376 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.509560 kubelet[3990]: E0123 18:52:53.509552 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.509596 kubelet[3990]: W0123 18:52:53.509585 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.509634 kubelet[3990]: E0123 18:52:53.509602 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.509780 kubelet[3990]: E0123 18:52:53.509747 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.509780 kubelet[3990]: W0123 18:52:53.509753 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.509780 kubelet[3990]: E0123 18:52:53.509762 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.509910 kubelet[3990]: E0123 18:52:53.509873 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.509910 kubelet[3990]: W0123 18:52:53.509878 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.509910 kubelet[3990]: E0123 18:52:53.509892 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.510073 kubelet[3990]: E0123 18:52:53.510054 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.510073 kubelet[3990]: W0123 18:52:53.510063 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.510135 kubelet[3990]: E0123 18:52:53.510077 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.510208 kubelet[3990]: E0123 18:52:53.510190 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.510208 kubelet[3990]: W0123 18:52:53.510196 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.510208 kubelet[3990]: E0123 18:52:53.510206 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.510348 kubelet[3990]: E0123 18:52:53.510295 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.510348 kubelet[3990]: W0123 18:52:53.510300 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.510348 kubelet[3990]: E0123 18:52:53.510313 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.510508 kubelet[3990]: E0123 18:52:53.510464 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.510508 kubelet[3990]: W0123 18:52:53.510469 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.510627 kubelet[3990]: E0123 18:52:53.510560 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.510714 kubelet[3990]: E0123 18:52:53.510705 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.510744 kubelet[3990]: W0123 18:52:53.510716 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.510744 kubelet[3990]: E0123 18:52:53.510725 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:53.515848 kubelet[3990]: E0123 18:52:53.515831 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:53.515848 kubelet[3990]: W0123 18:52:53.515844 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:53.515930 kubelet[3990]: E0123 18:52:53.515855 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:53.545887 containerd[2555]: time="2026-01-23T18:52:53.545842494Z" level=info msg="connecting to shim dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d" address="unix:///run/containerd/s/8878ed9be2a73299828ff5822709fb0191c43f775986fcbef416a705f9d862fd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:53.551000 audit[4551]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:53.552999 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 23 18:52:53.553044 kernel: audit: type=1325 audit(1769194373.551:566): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:53.551000 audit[4551]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffca2619880 a2=0 a3=7ffca261986c items=0 ppid=4095 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.568197 kernel: audit: type=1300 audit(1769194373.551:566): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffca2619880 a2=0 
a3=7ffca261986c items=0 ppid=4095 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.568276 kernel: audit: type=1327 audit(1769194373.551:566): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:53.551000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:53.573008 kernel: audit: type=1325 audit(1769194373.559:567): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:53.559000 audit[4551]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:52:53.559000 audit[4551]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca2619880 a2=0 a3=0 items=0 ppid=4095 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:53.585676 kernel: audit: type=1300 audit(1769194373.559:567): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca2619880 a2=0 a3=0 items=0 ppid=4095 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.585722 kernel: audit: type=1327 audit(1769194373.559:567): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:52:53.586803 systemd[1]: Started cri-containerd-dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d.scope - libcontainer container dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d. Jan 23 18:52:53.598000 audit: BPF prog-id=180 op=LOAD Jan 23 18:52:53.600000 audit: BPF prog-id=181 op=LOAD Jan 23 18:52:53.602428 kernel: audit: type=1334 audit(1769194373.598:568): prog-id=180 op=LOAD Jan 23 18:52:53.602469 kernel: audit: type=1334 audit(1769194373.600:569): prog-id=181 op=LOAD Jan 23 18:52:53.600000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4541 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.607409 kernel: audit: type=1300 audit(1769194373.600:569): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4541 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464323363313534323934373763363539613836396264383932633562 Jan 23 18:52:53.613320 kernel: audit: type=1327 audit(1769194373.600:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464323363313534323934373763363539613836396264383932633562 Jan 23 18:52:53.600000 audit: BPF 
prog-id=181 op=UNLOAD Jan 23 18:52:53.600000 audit[4556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464323363313534323934373763363539613836396264383932633562 Jan 23 18:52:53.600000 audit: BPF prog-id=182 op=LOAD Jan 23 18:52:53.600000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4541 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464323363313534323934373763363539613836396264383932633562 Jan 23 18:52:53.600000 audit: BPF prog-id=183 op=LOAD Jan 23 18:52:53.600000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4541 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.600000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464323363313534323934373763363539613836396264383932633562 Jan 23 18:52:53.600000 audit: BPF prog-id=183 op=UNLOAD Jan 23 18:52:53.600000 audit[4556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464323363313534323934373763363539613836396264383932633562 Jan 23 18:52:53.600000 audit: BPF prog-id=182 op=UNLOAD Jan 23 18:52:53.600000 audit[4556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:53.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464323363313534323934373763363539613836396264383932633562 Jan 23 18:52:53.600000 audit: BPF prog-id=184 op=LOAD Jan 23 18:52:53.600000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4541 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:52:53.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464323363313534323934373763363539613836396264383932633562 Jan 23 18:52:53.628447 containerd[2555]: time="2026-01-23T18:52:53.628375363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jrz5r,Uid:5361c453-2c10-4c8f-a94c-fffa451baddb,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d\"" Jan 23 18:52:54.805059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210318487.mount: Deactivated successfully. Jan 23 18:52:54.891936 kubelet[3990]: E0123 18:52:54.891902 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:52:55.839898 containerd[2555]: time="2026-01-23T18:52:55.839856376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:55.842070 containerd[2555]: time="2026-01-23T18:52:55.842033818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 23 18:52:55.844538 containerd[2555]: time="2026-01-23T18:52:55.844499000Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:55.848620 containerd[2555]: time="2026-01-23T18:52:55.848563597Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:55.850246 containerd[2555]: time="2026-01-23T18:52:55.850189371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.414505629s" Jan 23 18:52:55.850246 containerd[2555]: time="2026-01-23T18:52:55.850227468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 18:52:55.854494 containerd[2555]: time="2026-01-23T18:52:55.853106883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 18:52:55.868414 containerd[2555]: time="2026-01-23T18:52:55.868392186Z" level=info msg="CreateContainer within sandbox \"e7c388632548a4d69d2c17c985bc41f12829eb3bd961e2ff45f87137c513017b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 18:52:55.885658 containerd[2555]: time="2026-01-23T18:52:55.885634987Z" level=info msg="Container e2c5c189659045749993c6689c4c70895faf30ebb9903823f1d4868d08ef3f07: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:55.899082 containerd[2555]: time="2026-01-23T18:52:55.899046639Z" level=info msg="CreateContainer within sandbox \"e7c388632548a4d69d2c17c985bc41f12829eb3bd961e2ff45f87137c513017b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e2c5c189659045749993c6689c4c70895faf30ebb9903823f1d4868d08ef3f07\"" Jan 23 18:52:55.899455 containerd[2555]: time="2026-01-23T18:52:55.899437151Z" level=info msg="StartContainer for 
\"e2c5c189659045749993c6689c4c70895faf30ebb9903823f1d4868d08ef3f07\"" Jan 23 18:52:55.900599 containerd[2555]: time="2026-01-23T18:52:55.900533300Z" level=info msg="connecting to shim e2c5c189659045749993c6689c4c70895faf30ebb9903823f1d4868d08ef3f07" address="unix:///run/containerd/s/8d0269818b393ebd6ebdafe295729eb48782ae290cfb66d8fa428efa81b5eee1" protocol=ttrpc version=3 Jan 23 18:52:55.920631 systemd[1]: Started cri-containerd-e2c5c189659045749993c6689c4c70895faf30ebb9903823f1d4868d08ef3f07.scope - libcontainer container e2c5c189659045749993c6689c4c70895faf30ebb9903823f1d4868d08ef3f07. Jan 23 18:52:55.931000 audit: BPF prog-id=185 op=LOAD Jan 23 18:52:55.932000 audit: BPF prog-id=186 op=LOAD Jan 23 18:52:55.932000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4422 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:55.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532633563313839363539303435373439393933633636383963346337 Jan 23 18:52:55.932000 audit: BPF prog-id=186 op=UNLOAD Jan 23 18:52:55.932000 audit[4589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4422 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:55.932000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532633563313839363539303435373439393933633636383963346337 Jan 23 18:52:55.932000 audit: BPF prog-id=187 op=LOAD Jan 23 18:52:55.932000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4422 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:55.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532633563313839363539303435373439393933633636383963346337 Jan 23 18:52:55.932000 audit: BPF prog-id=188 op=LOAD Jan 23 18:52:55.932000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4422 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:55.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532633563313839363539303435373439393933633636383963346337 Jan 23 18:52:55.932000 audit: BPF prog-id=188 op=UNLOAD Jan 23 18:52:55.932000 audit[4589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4422 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 18:52:55.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532633563313839363539303435373439393933633636383963346337 Jan 23 18:52:55.932000 audit: BPF prog-id=187 op=UNLOAD Jan 23 18:52:55.932000 audit[4589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4422 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:55.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532633563313839363539303435373439393933633636383963346337 Jan 23 18:52:55.932000 audit: BPF prog-id=189 op=LOAD Jan 23 18:52:55.932000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4422 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:55.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532633563313839363539303435373439393933633636383963346337 Jan 23 18:52:55.969494 containerd[2555]: time="2026-01-23T18:52:55.969427401Z" level=info msg="StartContainer for \"e2c5c189659045749993c6689c4c70895faf30ebb9903823f1d4868d08ef3f07\" returns successfully" Jan 23 18:52:56.891913 kubelet[3990]: E0123 18:52:56.891865 3990 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:52:57.015903 kubelet[3990]: E0123 18:52:57.015877 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.015903 kubelet[3990]: W0123 18:52:57.015900 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.016059 kubelet[3990]: E0123 18:52:57.015917 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.016059 kubelet[3990]: E0123 18:52:57.016036 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.016059 kubelet[3990]: W0123 18:52:57.016044 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.016059 kubelet[3990]: E0123 18:52:57.016052 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.016178 kubelet[3990]: E0123 18:52:57.016172 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.016203 kubelet[3990]: W0123 18:52:57.016178 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.016203 kubelet[3990]: E0123 18:52:57.016186 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.016321 kubelet[3990]: E0123 18:52:57.016312 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.016361 kubelet[3990]: W0123 18:52:57.016347 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.016411 kubelet[3990]: E0123 18:52:57.016359 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.016519 kubelet[3990]: E0123 18:52:57.016507 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.016519 kubelet[3990]: W0123 18:52:57.016516 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.016573 kubelet[3990]: E0123 18:52:57.016524 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.016633 kubelet[3990]: E0123 18:52:57.016623 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.016633 kubelet[3990]: W0123 18:52:57.016631 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.016700 kubelet[3990]: E0123 18:52:57.016637 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.016736 kubelet[3990]: E0123 18:52:57.016729 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.016770 kubelet[3990]: W0123 18:52:57.016737 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.016770 kubelet[3990]: E0123 18:52:57.016743 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.016886 kubelet[3990]: E0123 18:52:57.016875 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.016886 kubelet[3990]: W0123 18:52:57.016884 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.016950 kubelet[3990]: E0123 18:52:57.016891 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.017015 kubelet[3990]: E0123 18:52:57.017004 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.017015 kubelet[3990]: W0123 18:52:57.017011 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.017086 kubelet[3990]: E0123 18:52:57.017018 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.017119 kubelet[3990]: E0123 18:52:57.017101 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.017119 kubelet[3990]: W0123 18:52:57.017106 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.017119 kubelet[3990]: E0123 18:52:57.017113 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.017236 kubelet[3990]: E0123 18:52:57.017196 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.017236 kubelet[3990]: W0123 18:52:57.017201 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.017236 kubelet[3990]: E0123 18:52:57.017207 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.017332 kubelet[3990]: E0123 18:52:57.017285 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.017332 kubelet[3990]: W0123 18:52:57.017290 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.017332 kubelet[3990]: E0123 18:52:57.017296 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.017436 kubelet[3990]: E0123 18:52:57.017378 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.017436 kubelet[3990]: W0123 18:52:57.017384 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.017436 kubelet[3990]: E0123 18:52:57.017390 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.017558 kubelet[3990]: E0123 18:52:57.017491 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.017558 kubelet[3990]: W0123 18:52:57.017496 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.017558 kubelet[3990]: E0123 18:52:57.017501 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.017646 kubelet[3990]: E0123 18:52:57.017597 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.017646 kubelet[3990]: W0123 18:52:57.017602 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.017646 kubelet[3990]: E0123 18:52:57.017608 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.034988 kubelet[3990]: E0123 18:52:57.034965 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.034988 kubelet[3990]: W0123 18:52:57.034982 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.035122 kubelet[3990]: E0123 18:52:57.035004 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.035360 kubelet[3990]: E0123 18:52:57.035234 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.035360 kubelet[3990]: W0123 18:52:57.035263 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.035360 kubelet[3990]: E0123 18:52:57.035280 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.035538 kubelet[3990]: E0123 18:52:57.035525 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.035538 kubelet[3990]: W0123 18:52:57.035536 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.035588 kubelet[3990]: E0123 18:52:57.035547 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.035739 kubelet[3990]: E0123 18:52:57.035718 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.035739 kubelet[3990]: W0123 18:52:57.035736 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.035796 kubelet[3990]: E0123 18:52:57.035748 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.035899 kubelet[3990]: E0123 18:52:57.035885 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.035943 kubelet[3990]: W0123 18:52:57.035931 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.035991 kubelet[3990]: E0123 18:52:57.035944 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.036103 kubelet[3990]: E0123 18:52:57.036091 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.036103 kubelet[3990]: W0123 18:52:57.036100 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.036179 kubelet[3990]: E0123 18:52:57.036114 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.037496 kubelet[3990]: E0123 18:52:57.036746 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.037496 kubelet[3990]: W0123 18:52:57.036763 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.037496 kubelet[3990]: E0123 18:52:57.036778 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.040614 kubelet[3990]: E0123 18:52:57.040599 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.040702 kubelet[3990]: W0123 18:52:57.040689 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.040746 kubelet[3990]: E0123 18:52:57.040738 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.042646 kubelet[3990]: E0123 18:52:57.042628 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.042646 kubelet[3990]: W0123 18:52:57.042644 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.042743 kubelet[3990]: E0123 18:52:57.042657 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.042775 kubelet[3990]: E0123 18:52:57.042761 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.042775 kubelet[3990]: W0123 18:52:57.042767 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.042818 kubelet[3990]: E0123 18:52:57.042774 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.042867 kubelet[3990]: E0123 18:52:57.042859 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.042889 kubelet[3990]: W0123 18:52:57.042868 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.042889 kubelet[3990]: E0123 18:52:57.042875 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.042992 kubelet[3990]: E0123 18:52:57.042984 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.043016 kubelet[3990]: W0123 18:52:57.042993 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.043016 kubelet[3990]: E0123 18:52:57.043000 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.043206 kubelet[3990]: E0123 18:52:57.043198 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.043232 kubelet[3990]: W0123 18:52:57.043207 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.043232 kubelet[3990]: E0123 18:52:57.043214 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.043308 kubelet[3990]: E0123 18:52:57.043301 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.043332 kubelet[3990]: W0123 18:52:57.043309 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.043332 kubelet[3990]: E0123 18:52:57.043316 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.043401 kubelet[3990]: E0123 18:52:57.043394 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.043425 kubelet[3990]: W0123 18:52:57.043402 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.043425 kubelet[3990]: E0123 18:52:57.043408 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.043743 kubelet[3990]: E0123 18:52:57.043507 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.043743 kubelet[3990]: W0123 18:52:57.043513 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.043743 kubelet[3990]: E0123 18:52:57.043519 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.043743 kubelet[3990]: E0123 18:52:57.043613 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.043743 kubelet[3990]: W0123 18:52:57.043617 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.043743 kubelet[3990]: E0123 18:52:57.043623 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:52:57.043889 kubelet[3990]: E0123 18:52:57.043830 3990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:52:57.043889 kubelet[3990]: W0123 18:52:57.043836 3990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:52:57.043889 kubelet[3990]: E0123 18:52:57.043842 3990 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:52:57.218000 containerd[2555]: time="2026-01-23T18:52:57.217711259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:57.220840 containerd[2555]: time="2026-01-23T18:52:57.220701456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 23 18:52:57.223472 containerd[2555]: time="2026-01-23T18:52:57.223432433Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:57.227776 containerd[2555]: time="2026-01-23T18:52:57.227638354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:57.228657 containerd[2555]: time="2026-01-23T18:52:57.228629917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.375490167s" Jan 23 18:52:57.228815 containerd[2555]: time="2026-01-23T18:52:57.228737325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 18:52:57.231111 containerd[2555]: time="2026-01-23T18:52:57.231084522Z" level=info msg="CreateContainer within sandbox \"dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 18:52:57.261008 containerd[2555]: time="2026-01-23T18:52:57.260980568Z" level=info msg="Container 770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:57.280988 containerd[2555]: time="2026-01-23T18:52:57.280961888Z" level=info msg="CreateContainer within sandbox \"dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024\"" Jan 23 18:52:57.281324 containerd[2555]: time="2026-01-23T18:52:57.281289929Z" level=info msg="StartContainer for \"770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024\"" Jan 23 18:52:57.282822 containerd[2555]: time="2026-01-23T18:52:57.282750422Z" level=info msg="connecting to shim 770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024" address="unix:///run/containerd/s/8878ed9be2a73299828ff5822709fb0191c43f775986fcbef416a705f9d862fd" protocol=ttrpc version=3 Jan 23 18:52:57.303676 systemd[1]: Started cri-containerd-770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024.scope - libcontainer container 
770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024. Jan 23 18:52:57.336000 audit: BPF prog-id=190 op=LOAD Jan 23 18:52:57.336000 audit[4666]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4541 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:57.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737306533346364353066626266636664303232396565626532396565 Jan 23 18:52:57.336000 audit: BPF prog-id=191 op=LOAD Jan 23 18:52:57.336000 audit[4666]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4541 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:57.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737306533346364353066626266636664303232396565626532396565 Jan 23 18:52:57.336000 audit: BPF prog-id=191 op=UNLOAD Jan 23 18:52:57.336000 audit[4666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:57.336000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737306533346364353066626266636664303232396565626532396565 Jan 23 18:52:57.336000 audit: BPF prog-id=190 op=UNLOAD Jan 23 18:52:57.336000 audit[4666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:57.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737306533346364353066626266636664303232396565626532396565 Jan 23 18:52:57.336000 audit: BPF prog-id=192 op=LOAD Jan 23 18:52:57.336000 audit[4666]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4541 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:52:57.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737306533346364353066626266636664303232396565626532396565 Jan 23 18:52:57.356897 containerd[2555]: time="2026-01-23T18:52:57.356871007Z" level=info msg="StartContainer for \"770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024\" returns successfully" Jan 23 18:52:57.360240 systemd[1]: cri-containerd-770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024.scope: Deactivated successfully. 
Jan 23 18:52:57.361000 audit: BPF prog-id=192 op=UNLOAD Jan 23 18:52:57.363553 containerd[2555]: time="2026-01-23T18:52:57.363526578Z" level=info msg="received container exit event container_id:\"770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024\" id:\"770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024\" pid:4679 exited_at:{seconds:1769194377 nanos:363159170}" Jan 23 18:52:57.382291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-770e34cd50fbbfcfd0229eebe29ee9aefa69b54b764de4f79351046953154024-rootfs.mount: Deactivated successfully. Jan 23 18:52:57.961680 kubelet[3990]: I0123 18:52:57.961301 3990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:52:57.974704 kubelet[3990]: I0123 18:52:57.974637 3990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6494dfcd8d-7vhbz" podStartSLOduration=3.558372178 podStartE2EDuration="5.974618449s" podCreationTimestamp="2026-01-23 18:52:52 +0000 UTC" firstStartedPulling="2026-01-23 18:52:53.435210695 +0000 UTC m=+19.638846099" lastFinishedPulling="2026-01-23 18:52:55.851456964 +0000 UTC m=+22.055092370" observedRunningTime="2026-01-23 18:52:56.970025837 +0000 UTC m=+23.173661239" watchObservedRunningTime="2026-01-23 18:52:57.974618449 +0000 UTC m=+24.178253855" Jan 23 18:52:58.891843 kubelet[3990]: E0123 18:52:58.891739 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:52:59.968287 containerd[2555]: time="2026-01-23T18:52:59.968026599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 18:53:00.891213 kubelet[3990]: E0123 18:53:00.890956 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:02.891264 kubelet[3990]: E0123 18:53:02.891168 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:03.500922 containerd[2555]: time="2026-01-23T18:53:03.500880904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:03.503142 containerd[2555]: time="2026-01-23T18:53:03.503043874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 23 18:53:03.505540 containerd[2555]: time="2026-01-23T18:53:03.505507646Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:03.509108 containerd[2555]: time="2026-01-23T18:53:03.508947355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:03.509799 containerd[2555]: time="2026-01-23T18:53:03.509414224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" 
in 3.541346094s" Jan 23 18:53:03.509799 containerd[2555]: time="2026-01-23T18:53:03.509441073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 18:53:03.511366 containerd[2555]: time="2026-01-23T18:53:03.511343351Z" level=info msg="CreateContainer within sandbox \"dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:53:03.529679 containerd[2555]: time="2026-01-23T18:53:03.529651773Z" level=info msg="Container ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:03.546305 containerd[2555]: time="2026-01-23T18:53:03.546277418Z" level=info msg="CreateContainer within sandbox \"dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf\"" Jan 23 18:53:03.546746 containerd[2555]: time="2026-01-23T18:53:03.546698444Z" level=info msg="StartContainer for \"ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf\"" Jan 23 18:53:03.548305 containerd[2555]: time="2026-01-23T18:53:03.548278898Z" level=info msg="connecting to shim ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf" address="unix:///run/containerd/s/8878ed9be2a73299828ff5822709fb0191c43f775986fcbef416a705f9d862fd" protocol=ttrpc version=3 Jan 23 18:53:03.570655 systemd[1]: Started cri-containerd-ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf.scope - libcontainer container ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf. 
Jan 23 18:53:03.618333 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 23 18:53:03.618412 kernel: audit: type=1334 audit(1769194383.615:590): prog-id=193 op=LOAD Jan 23 18:53:03.615000 audit: BPF prog-id=193 op=LOAD Jan 23 18:53:03.615000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4541 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:03.625530 kernel: audit: type=1300 audit(1769194383.615:590): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4541 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:03.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666313138663664623033613639303633316262333963613430303136 Jan 23 18:53:03.632511 kernel: audit: type=1327 audit(1769194383.615:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666313138663664623033613639303633316262333963613430303136 Jan 23 18:53:03.635141 kernel: audit: type=1334 audit(1769194383.615:591): prog-id=194 op=LOAD Jan 23 18:53:03.615000 audit: BPF prog-id=194 op=LOAD Jan 23 18:53:03.615000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4541 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:03.645221 kernel: audit: type=1300 audit(1769194383.615:591): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4541 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:03.645288 kernel: audit: type=1327 audit(1769194383.615:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666313138663664623033613639303633316262333963613430303136 Jan 23 18:53:03.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666313138663664623033613639303633316262333963613430303136 Jan 23 18:53:03.615000 audit: BPF prog-id=194 op=UNLOAD Jan 23 18:53:03.652137 kernel: audit: type=1334 audit(1769194383.615:592): prog-id=194 op=UNLOAD Jan 23 18:53:03.652195 kernel: audit: type=1300 audit(1769194383.615:592): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:03.615000 audit[4726]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:03.657493 kernel: audit: type=1327 audit(1769194383.615:592): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666313138663664623033613639303633316262333963613430303136 Jan 23 18:53:03.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666313138663664623033613639303633316262333963613430303136 Jan 23 18:53:03.615000 audit: BPF prog-id=193 op=UNLOAD Jan 23 18:53:03.659513 kernel: audit: type=1334 audit(1769194383.615:593): prog-id=193 op=UNLOAD Jan 23 18:53:03.615000 audit[4726]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:03.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666313138663664623033613639303633316262333963613430303136 Jan 23 18:53:03.616000 audit: BPF prog-id=195 op=LOAD Jan 23 18:53:03.616000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4541 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:03.616000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666313138663664623033613639303633316262333963613430303136 Jan 23 18:53:03.668238 containerd[2555]: time="2026-01-23T18:53:03.668208915Z" level=info msg="StartContainer for \"ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf\" returns successfully" Jan 23 18:53:04.891213 kubelet[3990]: E0123 18:53:04.891129 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:04.925773 systemd[1]: cri-containerd-ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf.scope: Deactivated successfully. Jan 23 18:53:04.926842 systemd[1]: cri-containerd-ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf.scope: Consumed 415ms CPU time, 192.9M memory peak, 171.3M written to disk. Jan 23 18:53:04.927000 audit: BPF prog-id=195 op=UNLOAD Jan 23 18:53:04.928793 containerd[2555]: time="2026-01-23T18:53:04.928754018Z" level=info msg="received container exit event container_id:\"ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf\" id:\"ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf\" pid:4738 exited_at:{seconds:1769194384 nanos:927179797}" Jan 23 18:53:04.947159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff118f6db03a690631bb39ca400163ab4db6a0730b857be65f93856c1ea98fbf-rootfs.mount: Deactivated successfully. 
Jan 23 18:53:04.990895 kubelet[3990]: I0123 18:53:04.990873 3990 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 18:53:05.029176 systemd[1]: Created slice kubepods-burstable-podb95b576c_0021_4070_9f4b_cf851ec9d8b5.slice - libcontainer container kubepods-burstable-podb95b576c_0021_4070_9f4b_cf851ec9d8b5.slice. Jan 23 18:53:05.044580 systemd[1]: Created slice kubepods-besteffort-pod0fe1ccdb_f11d_478d_b8c5_50e7a678ae44.slice - libcontainer container kubepods-besteffort-pod0fe1ccdb_f11d_478d_b8c5_50e7a678ae44.slice. Jan 23 18:53:05.050715 systemd[1]: Created slice kubepods-besteffort-podc2f1acaa_9237_4a56_b34a_eb28ae8b7529.slice - libcontainer container kubepods-besteffort-podc2f1acaa_9237_4a56_b34a_eb28ae8b7529.slice. Jan 23 18:53:05.064440 systemd[1]: Created slice kubepods-burstable-pod065ad94c_6bc1_4cb8_8e5f_8e21ce855f36.slice - libcontainer container kubepods-burstable-pod065ad94c_6bc1_4cb8_8e5f_8e21ce855f36.slice. Jan 23 18:53:05.070571 systemd[1]: Created slice kubepods-besteffort-podc7f09343_3d0b_4264_987b_68763f2830ab.slice - libcontainer container kubepods-besteffort-podc7f09343_3d0b_4264_987b_68763f2830ab.slice. Jan 23 18:53:05.077695 systemd[1]: Created slice kubepods-besteffort-pod0de0753d_529b_4481_b287_d3c7f2b0a7a6.slice - libcontainer container kubepods-besteffort-pod0de0753d_529b_4481_b287_d3c7f2b0a7a6.slice. Jan 23 18:53:05.084298 systemd[1]: Created slice kubepods-besteffort-pod12936b13_6ad9_4c1b_a913_2f3039ac097a.slice - libcontainer container kubepods-besteffort-pod12936b13_6ad9_4c1b_a913_2f3039ac097a.slice. 
Jan 23 18:53:05.088175 kubelet[3990]: I0123 18:53:05.088153 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c2f1acaa-9237-4a56-b34a-eb28ae8b7529-calico-apiserver-certs\") pod \"calico-apiserver-8686dc9b89-kzk6x\" (UID: \"c2f1acaa-9237-4a56-b34a-eb28ae8b7529\") " pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" Jan 23 18:53:05.088382 kubelet[3990]: I0123 18:53:05.088361 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/065ad94c-6bc1-4cb8-8e5f-8e21ce855f36-config-volume\") pod \"coredns-668d6bf9bc-2zwxx\" (UID: \"065ad94c-6bc1-4cb8-8e5f-8e21ce855f36\") " pod="kube-system/coredns-668d6bf9bc-2zwxx" Jan 23 18:53:05.088467 kubelet[3990]: I0123 18:53:05.088457 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0de0753d-529b-4481-b287-d3c7f2b0a7a6-whisker-backend-key-pair\") pod \"whisker-67798f8fcd-j9hnz\" (UID: \"0de0753d-529b-4481-b287-d3c7f2b0a7a6\") " pod="calico-system/whisker-67798f8fcd-j9hnz" Jan 23 18:53:05.088581 kubelet[3990]: I0123 18:53:05.088554 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12936b13-6ad9-4c1b-a913-2f3039ac097a-config\") pod \"goldmane-666569f655-22mgp\" (UID: \"12936b13-6ad9-4c1b-a913-2f3039ac097a\") " pod="calico-system/goldmane-666569f655-22mgp" Jan 23 18:53:05.088690 kubelet[3990]: I0123 18:53:05.088669 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5tn\" (UniqueName: \"kubernetes.io/projected/b95b576c-0021-4070-9f4b-cf851ec9d8b5-kube-api-access-xg5tn\") pod \"coredns-668d6bf9bc-57fbd\" (UID: 
\"b95b576c-0021-4070-9f4b-cf851ec9d8b5\") " pod="kube-system/coredns-668d6bf9bc-57fbd" Jan 23 18:53:05.088773 kubelet[3990]: I0123 18:53:05.088751 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dq6j\" (UniqueName: \"kubernetes.io/projected/c7f09343-3d0b-4264-987b-68763f2830ab-kube-api-access-5dq6j\") pod \"calico-apiserver-8686dc9b89-f4rb7\" (UID: \"c7f09343-3d0b-4264-987b-68763f2830ab\") " pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" Jan 23 18:53:05.088999 kubelet[3990]: I0123 18:53:05.088825 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9l65\" (UniqueName: \"kubernetes.io/projected/0de0753d-529b-4481-b287-d3c7f2b0a7a6-kube-api-access-f9l65\") pod \"whisker-67798f8fcd-j9hnz\" (UID: \"0de0753d-529b-4481-b287-d3c7f2b0a7a6\") " pod="calico-system/whisker-67798f8fcd-j9hnz" Jan 23 18:53:05.088999 kubelet[3990]: I0123 18:53:05.088864 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12936b13-6ad9-4c1b-a913-2f3039ac097a-goldmane-ca-bundle\") pod \"goldmane-666569f655-22mgp\" (UID: \"12936b13-6ad9-4c1b-a913-2f3039ac097a\") " pod="calico-system/goldmane-666569f655-22mgp" Jan 23 18:53:05.088999 kubelet[3990]: I0123 18:53:05.088890 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b95b576c-0021-4070-9f4b-cf851ec9d8b5-config-volume\") pod \"coredns-668d6bf9bc-57fbd\" (UID: \"b95b576c-0021-4070-9f4b-cf851ec9d8b5\") " pod="kube-system/coredns-668d6bf9bc-57fbd" Jan 23 18:53:05.089179 kubelet[3990]: I0123 18:53:05.089150 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9br\" (UniqueName: 
\"kubernetes.io/projected/c2f1acaa-9237-4a56-b34a-eb28ae8b7529-kube-api-access-6x9br\") pod \"calico-apiserver-8686dc9b89-kzk6x\" (UID: \"c2f1acaa-9237-4a56-b34a-eb28ae8b7529\") " pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" Jan 23 18:53:05.089262 kubelet[3990]: I0123 18:53:05.089241 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fe1ccdb-f11d-478d-b8c5-50e7a678ae44-tigera-ca-bundle\") pod \"calico-kube-controllers-6658c89489-trg8d\" (UID: \"0fe1ccdb-f11d-478d-b8c5-50e7a678ae44\") " pod="calico-system/calico-kube-controllers-6658c89489-trg8d" Jan 23 18:53:05.090567 kubelet[3990]: I0123 18:53:05.089352 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0de0753d-529b-4481-b287-d3c7f2b0a7a6-whisker-ca-bundle\") pod \"whisker-67798f8fcd-j9hnz\" (UID: \"0de0753d-529b-4481-b287-d3c7f2b0a7a6\") " pod="calico-system/whisker-67798f8fcd-j9hnz" Jan 23 18:53:05.090567 kubelet[3990]: I0123 18:53:05.089383 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/12936b13-6ad9-4c1b-a913-2f3039ac097a-goldmane-key-pair\") pod \"goldmane-666569f655-22mgp\" (UID: \"12936b13-6ad9-4c1b-a913-2f3039ac097a\") " pod="calico-system/goldmane-666569f655-22mgp" Jan 23 18:53:05.090567 kubelet[3990]: I0123 18:53:05.089416 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndlkb\" (UniqueName: \"kubernetes.io/projected/0fe1ccdb-f11d-478d-b8c5-50e7a678ae44-kube-api-access-ndlkb\") pod \"calico-kube-controllers-6658c89489-trg8d\" (UID: \"0fe1ccdb-f11d-478d-b8c5-50e7a678ae44\") " pod="calico-system/calico-kube-controllers-6658c89489-trg8d" Jan 23 18:53:05.090567 kubelet[3990]: I0123 18:53:05.089438 3990 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c7f09343-3d0b-4264-987b-68763f2830ab-calico-apiserver-certs\") pod \"calico-apiserver-8686dc9b89-f4rb7\" (UID: \"c7f09343-3d0b-4264-987b-68763f2830ab\") " pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" Jan 23 18:53:05.090567 kubelet[3990]: I0123 18:53:05.089463 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w6fl\" (UniqueName: \"kubernetes.io/projected/065ad94c-6bc1-4cb8-8e5f-8e21ce855f36-kube-api-access-6w6fl\") pod \"coredns-668d6bf9bc-2zwxx\" (UID: \"065ad94c-6bc1-4cb8-8e5f-8e21ce855f36\") " pod="kube-system/coredns-668d6bf9bc-2zwxx" Jan 23 18:53:05.090749 kubelet[3990]: I0123 18:53:05.089496 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82cb4\" (UniqueName: \"kubernetes.io/projected/12936b13-6ad9-4c1b-a913-2f3039ac097a-kube-api-access-82cb4\") pod \"goldmane-666569f655-22mgp\" (UID: \"12936b13-6ad9-4c1b-a913-2f3039ac097a\") " pod="calico-system/goldmane-666569f655-22mgp" Jan 23 18:53:05.405682 containerd[2555]: time="2026-01-23T18:53:05.405634568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57fbd,Uid:b95b576c-0021-4070-9f4b-cf851ec9d8b5,Namespace:kube-system,Attempt:0,}" Jan 23 18:53:05.406689 containerd[2555]: time="2026-01-23T18:53:05.406314046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2zwxx,Uid:065ad94c-6bc1-4cb8-8e5f-8e21ce855f36,Namespace:kube-system,Attempt:0,}" Jan 23 18:53:05.406689 containerd[2555]: time="2026-01-23T18:53:05.406345226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-22mgp,Uid:12936b13-6ad9-4c1b-a913-2f3039ac097a,Namespace:calico-system,Attempt:0,}" Jan 23 18:53:05.406689 containerd[2555]: time="2026-01-23T18:53:05.406373610Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67798f8fcd-j9hnz,Uid:0de0753d-529b-4481-b287-d3c7f2b0a7a6,Namespace:calico-system,Attempt:0,}" Jan 23 18:53:05.406882 containerd[2555]: time="2026-01-23T18:53:05.406401452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6658c89489-trg8d,Uid:0fe1ccdb-f11d-478d-b8c5-50e7a678ae44,Namespace:calico-system,Attempt:0,}" Jan 23 18:53:05.407132 containerd[2555]: time="2026-01-23T18:53:05.407119015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8686dc9b89-kzk6x,Uid:c2f1acaa-9237-4a56-b34a-eb28ae8b7529,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:53:05.407312 containerd[2555]: time="2026-01-23T18:53:05.407288808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8686dc9b89-f4rb7,Uid:c7f09343-3d0b-4264-987b-68763f2830ab,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:53:05.993727 containerd[2555]: time="2026-01-23T18:53:05.993557974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:53:06.058244 containerd[2555]: time="2026-01-23T18:53:06.058192273Z" level=error msg="Failed to destroy network for sandbox \"3747a5995ffde6cdbc50b24c45019c7404aec42e08e583dde718eaa2676777fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.061286 systemd[1]: run-netns-cni\x2df1b50cc9\x2dcd08\x2db177\x2d2ef0\x2d2917481975dc.mount: Deactivated successfully. 
Jan 23 18:53:06.069928 containerd[2555]: time="2026-01-23T18:53:06.069831202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8686dc9b89-kzk6x,Uid:c2f1acaa-9237-4a56-b34a-eb28ae8b7529,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3747a5995ffde6cdbc50b24c45019c7404aec42e08e583dde718eaa2676777fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.070083 kubelet[3990]: E0123 18:53:06.070011 3990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3747a5995ffde6cdbc50b24c45019c7404aec42e08e583dde718eaa2676777fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.070770 kubelet[3990]: E0123 18:53:06.070082 3990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3747a5995ffde6cdbc50b24c45019c7404aec42e08e583dde718eaa2676777fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" Jan 23 18:53:06.070770 kubelet[3990]: E0123 18:53:06.070105 3990 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3747a5995ffde6cdbc50b24c45019c7404aec42e08e583dde718eaa2676777fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" Jan 23 18:53:06.070770 kubelet[3990]: E0123 18:53:06.070145 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8686dc9b89-kzk6x_calico-apiserver(c2f1acaa-9237-4a56-b34a-eb28ae8b7529)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8686dc9b89-kzk6x_calico-apiserver(c2f1acaa-9237-4a56-b34a-eb28ae8b7529)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3747a5995ffde6cdbc50b24c45019c7404aec42e08e583dde718eaa2676777fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:53:06.105507 containerd[2555]: time="2026-01-23T18:53:06.103870849Z" level=error msg="Failed to destroy network for sandbox \"f3285476cb3001fc34a7d8a2a66953fbbcc58a3d4b2d54adc664c8c23f31ccbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.106769 systemd[1]: run-netns-cni\x2d4e519ea7\x2d0a8b\x2dadcf\x2db2b9\x2d09b5de757e23.mount: Deactivated successfully. Jan 23 18:53:06.115534 containerd[2555]: time="2026-01-23T18:53:06.115503939Z" level=error msg="Failed to destroy network for sandbox \"14efbdbaa4a50d78fe7bc2801319fe41be1a57c965e5589c746ae756cfed4ec2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.118128 systemd[1]: run-netns-cni\x2dfde58325\x2dd838\x2d60c8\x2d30f5\x2d5165bbfe50e3.mount: Deactivated successfully. 
Jan 23 18:53:06.119464 containerd[2555]: time="2026-01-23T18:53:06.118473759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-22mgp,Uid:12936b13-6ad9-4c1b-a913-2f3039ac097a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3285476cb3001fc34a7d8a2a66953fbbcc58a3d4b2d54adc664c8c23f31ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.120085 kubelet[3990]: E0123 18:53:06.120043 3990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3285476cb3001fc34a7d8a2a66953fbbcc58a3d4b2d54adc664c8c23f31ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.120166 kubelet[3990]: E0123 18:53:06.120106 3990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3285476cb3001fc34a7d8a2a66953fbbcc58a3d4b2d54adc664c8c23f31ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-22mgp" Jan 23 18:53:06.120166 kubelet[3990]: E0123 18:53:06.120130 3990 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3285476cb3001fc34a7d8a2a66953fbbcc58a3d4b2d54adc664c8c23f31ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-22mgp" Jan 23 18:53:06.120320 kubelet[3990]: E0123 18:53:06.120188 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-22mgp_calico-system(12936b13-6ad9-4c1b-a913-2f3039ac097a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-22mgp_calico-system(12936b13-6ad9-4c1b-a913-2f3039ac097a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3285476cb3001fc34a7d8a2a66953fbbcc58a3d4b2d54adc664c8c23f31ccbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:53:06.125410 containerd[2555]: time="2026-01-23T18:53:06.125305241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57fbd,Uid:b95b576c-0021-4070-9f4b-cf851ec9d8b5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14efbdbaa4a50d78fe7bc2801319fe41be1a57c965e5589c746ae756cfed4ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.126507 kubelet[3990]: E0123 18:53:06.126420 3990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14efbdbaa4a50d78fe7bc2801319fe41be1a57c965e5589c746ae756cfed4ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.126858 kubelet[3990]: E0123 18:53:06.126462 3990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14efbdbaa4a50d78fe7bc2801319fe41be1a57c965e5589c746ae756cfed4ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-57fbd" Jan 23 18:53:06.127015 kubelet[3990]: E0123 18:53:06.126851 3990 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14efbdbaa4a50d78fe7bc2801319fe41be1a57c965e5589c746ae756cfed4ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-57fbd" Jan 23 18:53:06.127015 kubelet[3990]: E0123 18:53:06.126905 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-57fbd_kube-system(b95b576c-0021-4070-9f4b-cf851ec9d8b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-57fbd_kube-system(b95b576c-0021-4070-9f4b-cf851ec9d8b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14efbdbaa4a50d78fe7bc2801319fe41be1a57c965e5589c746ae756cfed4ec2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-57fbd" podUID="b95b576c-0021-4070-9f4b-cf851ec9d8b5" Jan 23 18:53:06.127845 containerd[2555]: time="2026-01-23T18:53:06.127516859Z" level=error msg="Failed to destroy network for sandbox \"b575a1a5cb131fbb8ed5facc729783d83f30d0872d6dbbb2e06cc1082e6d6b60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 23 18:53:06.129975 systemd[1]: run-netns-cni\x2d0bb10826\x2d1ebb\x2d94ea\x2d1515\x2d6fd396207d5c.mount: Deactivated successfully. Jan 23 18:53:06.132000 containerd[2555]: time="2026-01-23T18:53:06.131904579Z" level=error msg="Failed to destroy network for sandbox \"42673659196f13082afcbfb0513552b53e0324de24f93400fbc401fb5a794128\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.137407 containerd[2555]: time="2026-01-23T18:53:06.137378256Z" level=error msg="Failed to destroy network for sandbox \"b6b6a242141c5e04074fe60b2ed6032b6841c97513bfa5a1ffd24c79a8742b83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.140060 containerd[2555]: time="2026-01-23T18:53:06.140029126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2zwxx,Uid:065ad94c-6bc1-4cb8-8e5f-8e21ce855f36,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b575a1a5cb131fbb8ed5facc729783d83f30d0872d6dbbb2e06cc1082e6d6b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.140378 kubelet[3990]: E0123 18:53:06.140346 3990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b575a1a5cb131fbb8ed5facc729783d83f30d0872d6dbbb2e06cc1082e6d6b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 
18:53:06.140433 kubelet[3990]: E0123 18:53:06.140385 3990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b575a1a5cb131fbb8ed5facc729783d83f30d0872d6dbbb2e06cc1082e6d6b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2zwxx" Jan 23 18:53:06.140433 kubelet[3990]: E0123 18:53:06.140404 3990 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b575a1a5cb131fbb8ed5facc729783d83f30d0872d6dbbb2e06cc1082e6d6b60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2zwxx" Jan 23 18:53:06.140509 kubelet[3990]: E0123 18:53:06.140445 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2zwxx_kube-system(065ad94c-6bc1-4cb8-8e5f-8e21ce855f36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2zwxx_kube-system(065ad94c-6bc1-4cb8-8e5f-8e21ce855f36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b575a1a5cb131fbb8ed5facc729783d83f30d0872d6dbbb2e06cc1082e6d6b60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2zwxx" podUID="065ad94c-6bc1-4cb8-8e5f-8e21ce855f36" Jan 23 18:53:06.141262 containerd[2555]: time="2026-01-23T18:53:06.141211962Z" level=error msg="Failed to destroy network for sandbox \"a5f6df254bf2281c752d4e67fc6dc75127bd053b7c2580b7cfc8aea579fe0e0e\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.145060 containerd[2555]: time="2026-01-23T18:53:06.145027643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67798f8fcd-j9hnz,Uid:0de0753d-529b-4481-b287-d3c7f2b0a7a6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42673659196f13082afcbfb0513552b53e0324de24f93400fbc401fb5a794128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.145243 kubelet[3990]: E0123 18:53:06.145221 3990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42673659196f13082afcbfb0513552b53e0324de24f93400fbc401fb5a794128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.145291 kubelet[3990]: E0123 18:53:06.145262 3990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42673659196f13082afcbfb0513552b53e0324de24f93400fbc401fb5a794128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67798f8fcd-j9hnz" Jan 23 18:53:06.145291 kubelet[3990]: E0123 18:53:06.145280 3990 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42673659196f13082afcbfb0513552b53e0324de24f93400fbc401fb5a794128\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67798f8fcd-j9hnz" Jan 23 18:53:06.145349 kubelet[3990]: E0123 18:53:06.145313 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67798f8fcd-j9hnz_calico-system(0de0753d-529b-4481-b287-d3c7f2b0a7a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67798f8fcd-j9hnz_calico-system(0de0753d-529b-4481-b287-d3c7f2b0a7a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42673659196f13082afcbfb0513552b53e0324de24f93400fbc401fb5a794128\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67798f8fcd-j9hnz" podUID="0de0753d-529b-4481-b287-d3c7f2b0a7a6" Jan 23 18:53:06.150182 containerd[2555]: time="2026-01-23T18:53:06.150134853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6658c89489-trg8d,Uid:0fe1ccdb-f11d-478d-b8c5-50e7a678ae44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b6a242141c5e04074fe60b2ed6032b6841c97513bfa5a1ffd24c79a8742b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.150350 kubelet[3990]: E0123 18:53:06.150328 3990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b6a242141c5e04074fe60b2ed6032b6841c97513bfa5a1ffd24c79a8742b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 23 18:53:06.150395 kubelet[3990]: E0123 18:53:06.150372 3990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b6a242141c5e04074fe60b2ed6032b6841c97513bfa5a1ffd24c79a8742b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" Jan 23 18:53:06.150427 kubelet[3990]: E0123 18:53:06.150393 3990 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b6a242141c5e04074fe60b2ed6032b6841c97513bfa5a1ffd24c79a8742b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" Jan 23 18:53:06.150451 kubelet[3990]: E0123 18:53:06.150429 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6658c89489-trg8d_calico-system(0fe1ccdb-f11d-478d-b8c5-50e7a678ae44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6658c89489-trg8d_calico-system(0fe1ccdb-f11d-478d-b8c5-50e7a678ae44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6b6a242141c5e04074fe60b2ed6032b6841c97513bfa5a1ffd24c79a8742b83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:53:06.152463 containerd[2555]: time="2026-01-23T18:53:06.152423412Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8686dc9b89-f4rb7,Uid:c7f09343-3d0b-4264-987b-68763f2830ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f6df254bf2281c752d4e67fc6dc75127bd053b7c2580b7cfc8aea579fe0e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.152614 kubelet[3990]: E0123 18:53:06.152579 3990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f6df254bf2281c752d4e67fc6dc75127bd053b7c2580b7cfc8aea579fe0e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.152675 kubelet[3990]: E0123 18:53:06.152611 3990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f6df254bf2281c752d4e67fc6dc75127bd053b7c2580b7cfc8aea579fe0e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" Jan 23 18:53:06.152675 kubelet[3990]: E0123 18:53:06.152628 3990 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f6df254bf2281c752d4e67fc6dc75127bd053b7c2580b7cfc8aea579fe0e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" Jan 23 18:53:06.152746 kubelet[3990]: E0123 
18:53:06.152669 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8686dc9b89-f4rb7_calico-apiserver(c7f09343-3d0b-4264-987b-68763f2830ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8686dc9b89-f4rb7_calico-apiserver(c7f09343-3d0b-4264-987b-68763f2830ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5f6df254bf2281c752d4e67fc6dc75127bd053b7c2580b7cfc8aea579fe0e0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:53:06.896148 systemd[1]: Created slice kubepods-besteffort-podad1b7350_c4c8_43d5_adb7_51075adcd4fd.slice - libcontainer container kubepods-besteffort-podad1b7350_c4c8_43d5_adb7_51075adcd4fd.slice. Jan 23 18:53:06.898318 containerd[2555]: time="2026-01-23T18:53:06.898281643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slbmv,Uid:ad1b7350-c4c8-43d5-adb7-51075adcd4fd,Namespace:calico-system,Attempt:0,}" Jan 23 18:53:06.948249 systemd[1]: run-netns-cni\x2d90d53c37\x2df4a4\x2dec52\x2d8785\x2d16020bfce5f2.mount: Deactivated successfully. Jan 23 18:53:06.948806 systemd[1]: run-netns-cni\x2d678cd57d\x2d32ba\x2dc8a3\x2dc4e8\x2d9dfe09787814.mount: Deactivated successfully. Jan 23 18:53:06.948862 systemd[1]: run-netns-cni\x2dafafdbcf\x2dec31\x2df64c\x2dab45\x2dff8d282d6393.mount: Deactivated successfully. 
Jan 23 18:53:06.952516 containerd[2555]: time="2026-01-23T18:53:06.952459342Z" level=error msg="Failed to destroy network for sandbox \"14d4137bba76e4a7cf2b8c020e9ccb1a4d9ae3f04781fce9399d158b07f398d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.954518 systemd[1]: run-netns-cni\x2d30a2734a\x2dd705\x2d4954\x2d34fc\x2da5097644e808.mount: Deactivated successfully. Jan 23 18:53:06.957932 containerd[2555]: time="2026-01-23T18:53:06.957896009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slbmv,Uid:ad1b7350-c4c8-43d5-adb7-51075adcd4fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14d4137bba76e4a7cf2b8c020e9ccb1a4d9ae3f04781fce9399d158b07f398d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.958142 kubelet[3990]: E0123 18:53:06.958087 3990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14d4137bba76e4a7cf2b8c020e9ccb1a4d9ae3f04781fce9399d158b07f398d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:53:06.958142 kubelet[3990]: E0123 18:53:06.958134 3990 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14d4137bba76e4a7cf2b8c020e9ccb1a4d9ae3f04781fce9399d158b07f398d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-slbmv" Jan 23 18:53:06.958221 kubelet[3990]: E0123 18:53:06.958157 3990 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14d4137bba76e4a7cf2b8c020e9ccb1a4d9ae3f04781fce9399d158b07f398d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-slbmv" Jan 23 18:53:06.958221 kubelet[3990]: E0123 18:53:06.958198 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14d4137bba76e4a7cf2b8c020e9ccb1a4d9ae3f04781fce9399d158b07f398d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:12.821945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783435538.mount: Deactivated successfully. 
Jan 23 18:53:12.845522 containerd[2555]: time="2026-01-23T18:53:12.845460929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:12.847675 containerd[2555]: time="2026-01-23T18:53:12.847581448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 23 18:53:12.849970 containerd[2555]: time="2026-01-23T18:53:12.849943918Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:12.853120 containerd[2555]: time="2026-01-23T18:53:12.853086869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:12.853781 containerd[2555]: time="2026-01-23T18:53:12.853652776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.859879763s" Jan 23 18:53:12.853781 containerd[2555]: time="2026-01-23T18:53:12.853682664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 18:53:12.863938 containerd[2555]: time="2026-01-23T18:53:12.863912657Z" level=info msg="CreateContainer within sandbox \"dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 18:53:12.884259 containerd[2555]: time="2026-01-23T18:53:12.884203570Z" level=info msg="Container 
1deba5cdaba1787c8cb150fbc5644166bb33d983891a8ca6a3dab535fa308bc3: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:12.899619 containerd[2555]: time="2026-01-23T18:53:12.899594152Z" level=info msg="CreateContainer within sandbox \"dd23c15429477c659a869bd892c5b37b1de72e59d07852f3756f86bc07eb126d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1deba5cdaba1787c8cb150fbc5644166bb33d983891a8ca6a3dab535fa308bc3\"" Jan 23 18:53:12.900092 containerd[2555]: time="2026-01-23T18:53:12.900061859Z" level=info msg="StartContainer for \"1deba5cdaba1787c8cb150fbc5644166bb33d983891a8ca6a3dab535fa308bc3\"" Jan 23 18:53:12.901550 containerd[2555]: time="2026-01-23T18:53:12.901521186Z" level=info msg="connecting to shim 1deba5cdaba1787c8cb150fbc5644166bb33d983891a8ca6a3dab535fa308bc3" address="unix:///run/containerd/s/8878ed9be2a73299828ff5822709fb0191c43f775986fcbef416a705f9d862fd" protocol=ttrpc version=3 Jan 23 18:53:12.920646 systemd[1]: Started cri-containerd-1deba5cdaba1787c8cb150fbc5644166bb33d983891a8ca6a3dab535fa308bc3.scope - libcontainer container 1deba5cdaba1787c8cb150fbc5644166bb33d983891a8ca6a3dab535fa308bc3. 
Jan 23 18:53:12.971000 audit: BPF prog-id=196 op=LOAD Jan 23 18:53:12.973053 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 23 18:53:12.973109 kernel: audit: type=1334 audit(1769194392.971:596): prog-id=196 op=LOAD Jan 23 18:53:12.971000 audit[5000]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4541 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:12.977766 kernel: audit: type=1300 audit(1769194392.971:596): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4541 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:12.982519 kernel: audit: type=1327 audit(1769194392.971:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656261356364616261313738376338636231353066626335363434 Jan 23 18:53:12.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656261356364616261313738376338636231353066626335363434 Jan 23 18:53:12.971000 audit: BPF prog-id=197 op=LOAD Jan 23 18:53:12.985492 kernel: audit: type=1334 audit(1769194392.971:597): prog-id=197 op=LOAD Jan 23 18:53:12.985542 kernel: audit: type=1300 audit(1769194392.971:597): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4541 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:12.971000 audit[5000]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4541 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:12.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656261356364616261313738376338636231353066626335363434 Jan 23 18:53:13.000288 kernel: audit: type=1327 audit(1769194392.971:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656261356364616261313738376338636231353066626335363434 Jan 23 18:53:12.971000 audit: BPF prog-id=197 op=UNLOAD Jan 23 18:53:13.004020 kernel: audit: type=1334 audit(1769194392.971:598): prog-id=197 op=UNLOAD Jan 23 18:53:13.009844 kernel: audit: type=1300 audit(1769194392.971:598): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:12.971000 audit[5000]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:13.014526 kernel: audit: type=1327 audit(1769194392.971:598): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656261356364616261313738376338636231353066626335363434 Jan 23 18:53:13.016397 kernel: audit: type=1334 audit(1769194392.971:599): prog-id=196 op=UNLOAD Jan 23 18:53:12.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656261356364616261313738376338636231353066626335363434 Jan 23 18:53:12.971000 audit: BPF prog-id=196 op=UNLOAD Jan 23 18:53:12.971000 audit[5000]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:12.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656261356364616261313738376338636231353066626335363434 Jan 23 18:53:12.971000 audit: BPF prog-id=198 op=LOAD Jan 23 18:53:12.971000 audit[5000]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4541 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:12.971000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656261356364616261313738376338636231353066626335363434 Jan 23 18:53:13.020397 containerd[2555]: time="2026-01-23T18:53:13.020370144Z" level=info msg="StartContainer for \"1deba5cdaba1787c8cb150fbc5644166bb33d983891a8ca6a3dab535fa308bc3\" returns successfully" Jan 23 18:53:13.484602 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 18:53:13.484717 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 18:53:13.631578 kubelet[3990]: I0123 18:53:13.631018 3990 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0de0753d-529b-4481-b287-d3c7f2b0a7a6-whisker-backend-key-pair\") pod \"0de0753d-529b-4481-b287-d3c7f2b0a7a6\" (UID: \"0de0753d-529b-4481-b287-d3c7f2b0a7a6\") " Jan 23 18:53:13.631578 kubelet[3990]: I0123 18:53:13.631060 3990 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9l65\" (UniqueName: \"kubernetes.io/projected/0de0753d-529b-4481-b287-d3c7f2b0a7a6-kube-api-access-f9l65\") pod \"0de0753d-529b-4481-b287-d3c7f2b0a7a6\" (UID: \"0de0753d-529b-4481-b287-d3c7f2b0a7a6\") " Jan 23 18:53:13.631578 kubelet[3990]: I0123 18:53:13.631095 3990 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0de0753d-529b-4481-b287-d3c7f2b0a7a6-whisker-ca-bundle\") pod \"0de0753d-529b-4481-b287-d3c7f2b0a7a6\" (UID: \"0de0753d-529b-4481-b287-d3c7f2b0a7a6\") " Jan 23 18:53:13.633082 kubelet[3990]: I0123 18:53:13.632863 3990 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de0753d-529b-4481-b287-d3c7f2b0a7a6-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "0de0753d-529b-4481-b287-d3c7f2b0a7a6" (UID: "0de0753d-529b-4481-b287-d3c7f2b0a7a6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:53:13.636382 kubelet[3990]: I0123 18:53:13.636333 3990 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de0753d-529b-4481-b287-d3c7f2b0a7a6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0de0753d-529b-4481-b287-d3c7f2b0a7a6" (UID: "0de0753d-529b-4481-b287-d3c7f2b0a7a6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:53:13.637645 kubelet[3990]: I0123 18:53:13.637602 3990 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de0753d-529b-4481-b287-d3c7f2b0a7a6-kube-api-access-f9l65" (OuterVolumeSpecName: "kube-api-access-f9l65") pod "0de0753d-529b-4481-b287-d3c7f2b0a7a6" (UID: "0de0753d-529b-4481-b287-d3c7f2b0a7a6"). InnerVolumeSpecName "kube-api-access-f9l65". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:53:13.732100 kubelet[3990]: I0123 18:53:13.732076 3990 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0de0753d-529b-4481-b287-d3c7f2b0a7a6-whisker-ca-bundle\") on node \"ci-4547.1.0-a-90f1f3b2aa\" DevicePath \"\"" Jan 23 18:53:13.732100 kubelet[3990]: I0123 18:53:13.732103 3990 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f9l65\" (UniqueName: \"kubernetes.io/projected/0de0753d-529b-4481-b287-d3c7f2b0a7a6-kube-api-access-f9l65\") on node \"ci-4547.1.0-a-90f1f3b2aa\" DevicePath \"\"" Jan 23 18:53:13.732224 kubelet[3990]: I0123 18:53:13.732113 3990 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0de0753d-529b-4481-b287-d3c7f2b0a7a6-whisker-backend-key-pair\") on node \"ci-4547.1.0-a-90f1f3b2aa\" DevicePath \"\"" Jan 23 18:53:13.821628 systemd[1]: var-lib-kubelet-pods-0de0753d\x2d529b\x2d4481\x2db287\x2dd3c7f2b0a7a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df9l65.mount: Deactivated successfully. Jan 23 18:53:13.821728 systemd[1]: var-lib-kubelet-pods-0de0753d\x2d529b\x2d4481\x2db287\x2dd3c7f2b0a7a6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 18:53:13.896584 systemd[1]: Removed slice kubepods-besteffort-pod0de0753d_529b_4481_b287_d3c7f2b0a7a6.slice - libcontainer container kubepods-besteffort-pod0de0753d_529b_4481_b287_d3c7f2b0a7a6.slice. 
Jan 23 18:53:14.025902 kubelet[3990]: I0123 18:53:14.025709 3990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jrz5r" podStartSLOduration=1.801364674 podStartE2EDuration="21.025691714s" podCreationTimestamp="2026-01-23 18:52:53 +0000 UTC" firstStartedPulling="2026-01-23 18:52:53.63022578 +0000 UTC m=+19.833861184" lastFinishedPulling="2026-01-23 18:53:12.854552822 +0000 UTC m=+39.058188224" observedRunningTime="2026-01-23 18:53:14.024195661 +0000 UTC m=+40.227831066" watchObservedRunningTime="2026-01-23 18:53:14.025691714 +0000 UTC m=+40.229327119" Jan 23 18:53:14.104640 systemd[1]: Created slice kubepods-besteffort-pod7886516f_3341_4184_8abc_3d16d954f0c6.slice - libcontainer container kubepods-besteffort-pod7886516f_3341_4184_8abc_3d16d954f0c6.slice. Jan 23 18:53:14.134798 kubelet[3990]: I0123 18:53:14.134764 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwc6j\" (UniqueName: \"kubernetes.io/projected/7886516f-3341-4184-8abc-3d16d954f0c6-kube-api-access-cwc6j\") pod \"whisker-774b9649d4-hsh9h\" (UID: \"7886516f-3341-4184-8abc-3d16d954f0c6\") " pod="calico-system/whisker-774b9649d4-hsh9h" Jan 23 18:53:14.134894 kubelet[3990]: I0123 18:53:14.134811 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7886516f-3341-4184-8abc-3d16d954f0c6-whisker-backend-key-pair\") pod \"whisker-774b9649d4-hsh9h\" (UID: \"7886516f-3341-4184-8abc-3d16d954f0c6\") " pod="calico-system/whisker-774b9649d4-hsh9h" Jan 23 18:53:14.134894 kubelet[3990]: I0123 18:53:14.134842 3990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7886516f-3341-4184-8abc-3d16d954f0c6-whisker-ca-bundle\") pod \"whisker-774b9649d4-hsh9h\" (UID: 
\"7886516f-3341-4184-8abc-3d16d954f0c6\") " pod="calico-system/whisker-774b9649d4-hsh9h" Jan 23 18:53:14.409196 containerd[2555]: time="2026-01-23T18:53:14.409108615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-774b9649d4-hsh9h,Uid:7886516f-3341-4184-8abc-3d16d954f0c6,Namespace:calico-system,Attempt:0,}" Jan 23 18:53:14.522198 systemd-networkd[2150]: calicec9dd04494: Link UP Jan 23 18:53:14.522826 systemd-networkd[2150]: calicec9dd04494: Gained carrier Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.435 [INFO][5068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.443 [INFO][5068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0 whisker-774b9649d4- calico-system 7886516f-3341-4184-8abc-3d16d954f0c6 865 0 2026-01-23 18:53:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:774b9649d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.1.0-a-90f1f3b2aa whisker-774b9649d4-hsh9h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicec9dd04494 [] [] }} ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Namespace="calico-system" Pod="whisker-774b9649d4-hsh9h" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.443 [INFO][5068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Namespace="calico-system" Pod="whisker-774b9649d4-hsh9h" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.463 [INFO][5080] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" HandleID="k8s-pod-network.fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.463 [INFO][5080] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" HandleID="k8s-pod-network.fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-90f1f3b2aa", "pod":"whisker-774b9649d4-hsh9h", "timestamp":"2026-01-23 18:53:14.463488806 +0000 UTC"}, Hostname:"ci-4547.1.0-a-90f1f3b2aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.463 [INFO][5080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.463 [INFO][5080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.463 [INFO][5080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-90f1f3b2aa' Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.467 [INFO][5080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.470 [INFO][5080] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.473 [INFO][5080] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.476 [INFO][5080] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.477 [INFO][5080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.477 [INFO][5080] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.478 [INFO][5080] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330 Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.482 [INFO][5080] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.489 [INFO][5080] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.114.1/26] block=192.168.114.0/26 handle="k8s-pod-network.fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.489 [INFO][5080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.1/26] handle="k8s-pod-network.fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.489 [INFO][5080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:53:14.539384 containerd[2555]: 2026-01-23 18:53:14.490 [INFO][5080] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.1/26] IPv6=[] ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" HandleID="k8s-pod-network.fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" Jan 23 18:53:14.539962 containerd[2555]: 2026-01-23 18:53:14.492 [INFO][5068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Namespace="calico-system" Pod="whisker-774b9649d4-hsh9h" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0", GenerateName:"whisker-774b9649d4-", Namespace:"calico-system", SelfLink:"", UID:"7886516f-3341-4184-8abc-3d16d954f0c6", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"774b9649d4", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"", Pod:"whisker-774b9649d4-hsh9h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicec9dd04494", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:14.539962 containerd[2555]: 2026-01-23 18:53:14.492 [INFO][5068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.1/32] ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Namespace="calico-system" Pod="whisker-774b9649d4-hsh9h" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" Jan 23 18:53:14.539962 containerd[2555]: 2026-01-23 18:53:14.493 [INFO][5068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicec9dd04494 ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Namespace="calico-system" Pod="whisker-774b9649d4-hsh9h" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" Jan 23 18:53:14.539962 containerd[2555]: 2026-01-23 18:53:14.523 [INFO][5068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Namespace="calico-system" Pod="whisker-774b9649d4-hsh9h" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" Jan 23 18:53:14.539962 containerd[2555]: 2026-01-23 18:53:14.523 [INFO][5068] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" Namespace="calico-system" Pod="whisker-774b9649d4-hsh9h" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0", GenerateName:"whisker-774b9649d4-", Namespace:"calico-system", SelfLink:"", UID:"7886516f-3341-4184-8abc-3d16d954f0c6", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"774b9649d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330", Pod:"whisker-774b9649d4-hsh9h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicec9dd04494", MAC:"92:ef:a3:bc:d2:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:14.539962 containerd[2555]: 2026-01-23 18:53:14.536 [INFO][5068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" 
Namespace="calico-system" Pod="whisker-774b9649d4-hsh9h" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-whisker--774b9649d4--hsh9h-eth0" Jan 23 18:53:14.573828 containerd[2555]: time="2026-01-23T18:53:14.573795568Z" level=info msg="connecting to shim fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330" address="unix:///run/containerd/s/43c346a7f87e5dadb27ea0234b81f62a11ca52611218c065723770b578ea6817" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:14.594660 systemd[1]: Started cri-containerd-fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330.scope - libcontainer container fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330. Jan 23 18:53:14.602000 audit: BPF prog-id=199 op=LOAD Jan 23 18:53:14.603000 audit: BPF prog-id=200 op=LOAD Jan 23 18:53:14.603000 audit[5113]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5102 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:14.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662626631356338373134353631386435373539366530383930656364 Jan 23 18:53:14.603000 audit: BPF prog-id=200 op=UNLOAD Jan 23 18:53:14.603000 audit[5113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5102 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:14.603000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662626631356338373134353631386435373539366530383930656364 Jan 23 18:53:14.603000 audit: BPF prog-id=201 op=LOAD Jan 23 18:53:14.603000 audit[5113]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5102 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:14.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662626631356338373134353631386435373539366530383930656364 Jan 23 18:53:14.603000 audit: BPF prog-id=202 op=LOAD Jan 23 18:53:14.603000 audit[5113]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5102 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:14.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662626631356338373134353631386435373539366530383930656364 Jan 23 18:53:14.603000 audit: BPF prog-id=202 op=UNLOAD Jan 23 18:53:14.603000 audit[5113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5102 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 18:53:14.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662626631356338373134353631386435373539366530383930656364 Jan 23 18:53:14.603000 audit: BPF prog-id=201 op=UNLOAD Jan 23 18:53:14.603000 audit[5113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5102 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:14.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662626631356338373134353631386435373539366530383930656364 Jan 23 18:53:14.603000 audit: BPF prog-id=203 op=LOAD Jan 23 18:53:14.603000 audit[5113]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5102 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:14.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662626631356338373134353631386435373539366530383930656364 Jan 23 18:53:14.633128 containerd[2555]: time="2026-01-23T18:53:14.633097004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-774b9649d4-hsh9h,Uid:7886516f-3341-4184-8abc-3d16d954f0c6,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"fbbf15c87145618d57596e0890ecdd2b421724f5d65052afc6e062fb43ba3330\"" Jan 23 18:53:14.634539 containerd[2555]: time="2026-01-23T18:53:14.634465499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:53:14.886462 containerd[2555]: time="2026-01-23T18:53:14.886419937Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:14.889208 containerd[2555]: time="2026-01-23T18:53:14.889149925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:53:14.889208 containerd[2555]: time="2026-01-23T18:53:14.889186430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:14.889568 kubelet[3990]: E0123 18:53:14.889524 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:53:14.889824 kubelet[3990]: E0123 18:53:14.889580 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:53:14.889852 kubelet[3990]: E0123 18:53:14.889723 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:81b9f55a93b74810ac86061c7b4e22d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwc6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-774b9649d4-hsh9h_calico-system(7886516f-3341-4184-8abc-3d16d954f0c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:14.892804 containerd[2555]: time="2026-01-23T18:53:14.892780937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:53:15.149258 containerd[2555]: 
time="2026-01-23T18:53:15.149159418Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:15.151586 containerd[2555]: time="2026-01-23T18:53:15.151556907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:53:15.151733 containerd[2555]: time="2026-01-23T18:53:15.151622166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:15.151794 kubelet[3990]: E0123 18:53:15.151747 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:53:15.151840 kubelet[3990]: E0123 18:53:15.151803 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:53:15.151950 kubelet[3990]: E0123 18:53:15.151917 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwc6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-774b9649d4-hsh9h_calico-system(7886516f-3341-4184-8abc-3d16d954f0c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:15.153722 kubelet[3990]: E0123 18:53:15.153687 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:53:15.893453 kubelet[3990]: I0123 18:53:15.893416 3990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de0753d-529b-4481-b287-d3c7f2b0a7a6" path="/var/lib/kubelet/pods/0de0753d-529b-4481-b287-d3c7f2b0a7a6/volumes" Jan 23 18:53:16.019729 kubelet[3990]: E0123 18:53:16.019688 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:53:16.039609 systemd-networkd[2150]: calicec9dd04494: Gained IPv6LL Jan 23 18:53:16.049000 audit[5243]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:16.049000 audit[5243]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe05917f80 a2=0 a3=7ffe05917f6c items=0 ppid=4095 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:16.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:16.051000 audit[5243]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:16.051000 audit[5243]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe05917f80 a2=0 a3=0 items=0 ppid=4095 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:16.051000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:17.892985 containerd[2555]: time="2026-01-23T18:53:17.892551354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slbmv,Uid:ad1b7350-c4c8-43d5-adb7-51075adcd4fd,Namespace:calico-system,Attempt:0,}" Jan 23 18:53:17.892985 containerd[2555]: time="2026-01-23T18:53:17.892554367Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8686dc9b89-f4rb7,Uid:c7f09343-3d0b-4264-987b-68763f2830ab,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:53:18.002714 systemd-networkd[2150]: cali3468819bbfd: Link UP Jan 23 18:53:18.002877 systemd-networkd[2150]: cali3468819bbfd: Gained carrier Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.939 [INFO][5280] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.948 [INFO][5280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0 calico-apiserver-8686dc9b89- calico-apiserver c7f09343-3d0b-4264-987b-68763f2830ab 799 0 2026-01-23 18:52:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8686dc9b89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-a-90f1f3b2aa calico-apiserver-8686dc9b89-f4rb7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3468819bbfd [] [] }} ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-f4rb7" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.948 [INFO][5280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-f4rb7" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.973 [INFO][5299] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" HandleID="k8s-pod-network.5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.973 [INFO][5299] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" HandleID="k8s-pod-network.5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-a-90f1f3b2aa", "pod":"calico-apiserver-8686dc9b89-f4rb7", "timestamp":"2026-01-23 18:53:17.973025242 +0000 UTC"}, Hostname:"ci-4547.1.0-a-90f1f3b2aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.973 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.973 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.973 [INFO][5299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-90f1f3b2aa' Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.978 [INFO][5299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.981 [INFO][5299] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.983 [INFO][5299] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.985 [INFO][5299] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.986 [INFO][5299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.986 [INFO][5299] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.987 [INFO][5299] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88 Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.991 [INFO][5299] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.997 [INFO][5299] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.114.2/26] block=192.168.114.0/26 handle="k8s-pod-network.5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.997 [INFO][5299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.2/26] handle="k8s-pod-network.5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.021931 containerd[2555]: 2026-01-23 18:53:17.997 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:53:18.022310 containerd[2555]: 2026-01-23 18:53:17.997 [INFO][5299] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.2/26] IPv6=[] ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" HandleID="k8s-pod-network.5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" Jan 23 18:53:18.022310 containerd[2555]: 2026-01-23 18:53:17.999 [INFO][5280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-f4rb7" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0", GenerateName:"calico-apiserver-8686dc9b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7f09343-3d0b-4264-987b-68763f2830ab", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"8686dc9b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"", Pod:"calico-apiserver-8686dc9b89-f4rb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3468819bbfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:18.022310 containerd[2555]: 2026-01-23 18:53:17.999 [INFO][5280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.2/32] ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-f4rb7" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" Jan 23 18:53:18.022310 containerd[2555]: 2026-01-23 18:53:17.999 [INFO][5280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3468819bbfd ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-f4rb7" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" Jan 23 18:53:18.022310 containerd[2555]: 2026-01-23 18:53:18.004 [INFO][5280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-f4rb7" 
WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" Jan 23 18:53:18.022407 containerd[2555]: 2026-01-23 18:53:18.005 [INFO][5280] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-f4rb7" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0", GenerateName:"calico-apiserver-8686dc9b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7f09343-3d0b-4264-987b-68763f2830ab", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8686dc9b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88", Pod:"calico-apiserver-8686dc9b89-f4rb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3468819bbfd", MAC:"76:86:82:98:01:6c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:18.022407 containerd[2555]: 2026-01-23 18:53:18.019 [INFO][5280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-f4rb7" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--f4rb7-eth0" Jan 23 18:53:18.057611 containerd[2555]: time="2026-01-23T18:53:18.057578708Z" level=info msg="connecting to shim 5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88" address="unix:///run/containerd/s/bc8b0c1b2203d6a5ee925a48ee4bc33f7e91faec476d7dcfd8f81922ad00baf0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:18.077653 systemd[1]: Started cri-containerd-5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88.scope - libcontainer container 5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88. 
Jan 23 18:53:18.092517 kernel: kauditd_printk_skb: 33 callbacks suppressed Jan 23 18:53:18.092591 kernel: audit: type=1334 audit(1769194398.089:611): prog-id=204 op=LOAD Jan 23 18:53:18.089000 audit: BPF prog-id=204 op=LOAD Jan 23 18:53:18.089000 audit: BPF prog-id=205 op=LOAD Jan 23 18:53:18.098530 kernel: audit: type=1334 audit(1769194398.089:612): prog-id=205 op=LOAD Jan 23 18:53:18.089000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.109575 kernel: audit: type=1300 audit(1769194398.089:612): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 Jan 23 18:53:18.119503 kernel: audit: type=1327 audit(1769194398.089:612): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 Jan 23 18:53:18.089000 audit: BPF prog-id=205 op=UNLOAD Jan 23 18:53:18.126514 kernel: audit: type=1334 audit(1769194398.089:613): prog-id=205 op=UNLOAD Jan 23 18:53:18.137783 kernel: audit: type=1300 audit(1769194398.089:613): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 
ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.089000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.135354 systemd-networkd[2150]: cali180a11a20c3: Link UP Jan 23 18:53:18.136302 systemd-networkd[2150]: cali180a11a20c3: Gained carrier Jan 23 18:53:18.146615 kernel: audit: type=1327 audit(1769194398.089:613): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 Jan 23 18:53:18.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 Jan 23 18:53:18.089000 audit: BPF prog-id=206 op=LOAD Jan 23 18:53:18.089000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.157403 kernel: audit: type=1334 audit(1769194398.089:614): prog-id=206 op=LOAD Jan 23 18:53:18.157454 kernel: audit: type=1300 audit(1769194398.089:614): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.157593 containerd[2555]: time="2026-01-23T18:53:18.157512726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8686dc9b89-f4rb7,Uid:c7f09343-3d0b-4264-987b-68763f2830ab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5da5d97edf3e4ff66194e4960d7780ea981e1fb4a4c2260a5d5f5c8376591e88\"" Jan 23 18:53:18.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 Jan 23 18:53:18.159937 containerd[2555]: time="2026-01-23T18:53:18.159599407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:53:18.164567 kernel: audit: type=1327 audit(1769194398.089:614): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 Jan 23 18:53:18.089000 audit: BPF prog-id=207 op=LOAD Jan 23 18:53:18.089000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 Jan 23 18:53:18.089000 audit: BPF 
prog-id=207 op=UNLOAD Jan 23 18:53:18.089000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 Jan 23 18:53:18.089000 audit: BPF prog-id=206 op=UNLOAD Jan 23 18:53:18.089000 audit[5340]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 Jan 23 18:53:18.089000 audit: BPF prog-id=208 op=LOAD Jan 23 18:53:18.089000 audit[5340]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5329 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564613564393765646633653466663636313934653439363064373738 
Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:17.939 [INFO][5276] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:17.949 [INFO][5276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0 csi-node-driver- calico-system ad1b7350-c4c8-43d5-adb7-51075adcd4fd 685 0 2026-01-23 18:52:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.1.0-a-90f1f3b2aa csi-node-driver-slbmv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali180a11a20c3 [] [] }} ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Namespace="calico-system" Pod="csi-node-driver-slbmv" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:17.949 [INFO][5276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Namespace="calico-system" Pod="csi-node-driver-slbmv" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:17.977 [INFO][5304] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" HandleID="k8s-pod-network.31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:17.977 [INFO][5304] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" HandleID="k8s-pod-network.31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5840), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-90f1f3b2aa", "pod":"csi-node-driver-slbmv", "timestamp":"2026-01-23 18:53:17.977735837 +0000 UTC"}, Hostname:"ci-4547.1.0-a-90f1f3b2aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:17.977 [INFO][5304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:17.998 [INFO][5304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:17.998 [INFO][5304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-90f1f3b2aa' Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.079 [INFO][5304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.084 [INFO][5304] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.089 [INFO][5304] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.094 [INFO][5304] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.099 [INFO][5304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.099 [INFO][5304] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.109 [INFO][5304] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.121 [INFO][5304] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.127 [INFO][5304] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.114.3/26] block=192.168.114.0/26 handle="k8s-pod-network.31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.127 [INFO][5304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.3/26] handle="k8s-pod-network.31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:18.168470 containerd[2555]: 2026-01-23 18:53:18.127 [INFO][5304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:53:18.169670 containerd[2555]: 2026-01-23 18:53:18.128 [INFO][5304] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.3/26] IPv6=[] ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" HandleID="k8s-pod-network.31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" Jan 23 18:53:18.169670 containerd[2555]: 2026-01-23 18:53:18.131 [INFO][5276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Namespace="calico-system" Pod="csi-node-driver-slbmv" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad1b7350-c4c8-43d5-adb7-51075adcd4fd", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"", Pod:"csi-node-driver-slbmv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali180a11a20c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:18.169670 containerd[2555]: 2026-01-23 18:53:18.131 [INFO][5276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.3/32] ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Namespace="calico-system" Pod="csi-node-driver-slbmv" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" Jan 23 18:53:18.169670 containerd[2555]: 2026-01-23 18:53:18.131 [INFO][5276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali180a11a20c3 ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Namespace="calico-system" Pod="csi-node-driver-slbmv" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" Jan 23 18:53:18.169670 containerd[2555]: 2026-01-23 18:53:18.135 [INFO][5276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Namespace="calico-system" Pod="csi-node-driver-slbmv" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" Jan 23 18:53:18.169670 
containerd[2555]: 2026-01-23 18:53:18.136 [INFO][5276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Namespace="calico-system" Pod="csi-node-driver-slbmv" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad1b7350-c4c8-43d5-adb7-51075adcd4fd", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c", Pod:"csi-node-driver-slbmv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali180a11a20c3", MAC:"42:1b:e7:ff:5e:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:18.169885 containerd[2555]: 
2026-01-23 18:53:18.162 [INFO][5276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" Namespace="calico-system" Pod="csi-node-driver-slbmv" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-csi--node--driver--slbmv-eth0" Jan 23 18:53:18.204261 containerd[2555]: time="2026-01-23T18:53:18.204222022Z" level=info msg="connecting to shim 31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c" address="unix:///run/containerd/s/dcccafe3b548ee1bb9b2fa4fd5ecd21a962493ef7ce2c8b4b51d088c4cda7713" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:18.228871 systemd[1]: Started cri-containerd-31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c.scope - libcontainer container 31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c. Jan 23 18:53:18.240000 audit: BPF prog-id=209 op=LOAD Jan 23 18:53:18.240000 audit: BPF prog-id=210 op=LOAD Jan 23 18:53:18.240000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5382 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633764343361666533616236633636613836346535633565386130 Jan 23 18:53:18.240000 audit: BPF prog-id=210 op=UNLOAD Jan 23 18:53:18.240000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5382 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.240000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633764343361666533616236633636613836346535633565386130 Jan 23 18:53:18.240000 audit: BPF prog-id=211 op=LOAD Jan 23 18:53:18.240000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=5382 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633764343361666533616236633636613836346535633565386130 Jan 23 18:53:18.240000 audit: BPF prog-id=212 op=LOAD Jan 23 18:53:18.240000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=5382 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633764343361666533616236633636613836346535633565386130 Jan 23 18:53:18.240000 audit: BPF prog-id=212 op=UNLOAD Jan 23 18:53:18.240000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5382 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 18:53:18.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633764343361666533616236633636613836346535633565386130 Jan 23 18:53:18.240000 audit: BPF prog-id=211 op=UNLOAD Jan 23 18:53:18.240000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5382 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633764343361666533616236633636613836346535633565386130 Jan 23 18:53:18.240000 audit: BPF prog-id=213 op=LOAD Jan 23 18:53:18.240000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=5382 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:18.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633764343361666533616236633636613836346535633565386130 Jan 23 18:53:18.269810 containerd[2555]: time="2026-01-23T18:53:18.269683941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-slbmv,Uid:ad1b7350-c4c8-43d5-adb7-51075adcd4fd,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"31c7d43afe3ab6c66a864e5c5e8a0fe19fce83f3f8a0e249ea508b236b90f80c\"" Jan 23 18:53:18.425305 containerd[2555]: time="2026-01-23T18:53:18.425202079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:18.427804 containerd[2555]: time="2026-01-23T18:53:18.427774101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:53:18.427893 containerd[2555]: time="2026-01-23T18:53:18.427840941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:18.427985 kubelet[3990]: E0123 18:53:18.427953 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:18.428239 kubelet[3990]: E0123 18:53:18.427995 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:18.428239 kubelet[3990]: E0123 18:53:18.428207 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dq6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8686dc9b89-f4rb7_calico-apiserver(c7f09343-3d0b-4264-987b-68763f2830ab): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:18.428717 containerd[2555]: time="2026-01-23T18:53:18.428646433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:53:18.429345 kubelet[3990]: E0123 18:53:18.429317 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:53:18.693861 containerd[2555]: time="2026-01-23T18:53:18.693761179Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:18.696334 containerd[2555]: time="2026-01-23T18:53:18.696289911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:53:18.696334 containerd[2555]: time="2026-01-23T18:53:18.696315485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:18.696527 kubelet[3990]: E0123 18:53:18.696472 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:53:18.696575 kubelet[3990]: E0123 18:53:18.696536 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:53:18.696733 kubelet[3990]: E0123 18:53:18.696696 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:18.699026 containerd[2555]: time="2026-01-23T18:53:18.698991439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:53:18.892038 containerd[2555]: time="2026-01-23T18:53:18.891989823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-22mgp,Uid:12936b13-6ad9-4c1b-a913-2f3039ac097a,Namespace:calico-system,Attempt:0,}" Jan 23 18:53:18.892156 containerd[2555]: time="2026-01-23T18:53:18.891989966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8686dc9b89-kzk6x,Uid:c2f1acaa-9237-4a56-b34a-eb28ae8b7529,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:53:18.966827 containerd[2555]: time="2026-01-23T18:53:18.966629096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:18.969502 containerd[2555]: time="2026-01-23T18:53:18.969445181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:53:18.969666 containerd[2555]: time="2026-01-23T18:53:18.969645919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:18.969839 kubelet[3990]: E0123 18:53:18.969808 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:53:18.969891 kubelet[3990]: E0123 18:53:18.969850 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:53:18.970495 kubelet[3990]: E0123 18:53:18.969957 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:18.971829 kubelet[3990]: E0123 18:53:18.971225 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:19.020626 systemd-networkd[2150]: cali13072bf81e1: Link UP Jan 23 18:53:19.021615 systemd-networkd[2150]: cali13072bf81e1: Gained carrier Jan 23 18:53:19.028796 kubelet[3990]: E0123 18:53:19.028742 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:19.034638 kubelet[3990]: E0123 18:53:19.034596 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.946 [INFO][5439] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.957 [INFO][5439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0 goldmane-666569f655- calico-system 12936b13-6ad9-4c1b-a913-2f3039ac097a 798 0 2026-01-23 18:52:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.1.0-a-90f1f3b2aa goldmane-666569f655-22mgp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali13072bf81e1 [] [] }} ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Namespace="calico-system" Pod="goldmane-666569f655-22mgp" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.957 [INFO][5439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Namespace="calico-system" Pod="goldmane-666569f655-22mgp" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.988 [INFO][5463] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" HandleID="k8s-pod-network.abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.988 [INFO][5463] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" HandleID="k8s-pod-network.abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-90f1f3b2aa", "pod":"goldmane-666569f655-22mgp", "timestamp":"2026-01-23 18:53:18.988226207 +0000 UTC"}, Hostname:"ci-4547.1.0-a-90f1f3b2aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.988 [INFO][5463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.988 [INFO][5463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.988 [INFO][5463] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-90f1f3b2aa' Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.994 [INFO][5463] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.996 [INFO][5463] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:18.999 [INFO][5463] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:19.000 [INFO][5463] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:19.002 [INFO][5463] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:19.002 [INFO][5463] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:19.003 [INFO][5463] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3 Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:19.007 [INFO][5463] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:19.015 [INFO][5463] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.4/26] block=192.168.114.0/26 handle="k8s-pod-network.abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:19.015 [INFO][5463] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.4/26] handle="k8s-pod-network.abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.039395 containerd[2555]: 2026-01-23 18:53:19.015 [INFO][5463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:53:19.039952 containerd[2555]: 2026-01-23 18:53:19.015 [INFO][5463] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.4/26] IPv6=[] ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" HandleID="k8s-pod-network.abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" Jan 23 18:53:19.039952 containerd[2555]: 2026-01-23 18:53:19.017 [INFO][5439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Namespace="calico-system" Pod="goldmane-666569f655-22mgp" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"12936b13-6ad9-4c1b-a913-2f3039ac097a", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"", Pod:"goldmane-666569f655-22mgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali13072bf81e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:19.039952 containerd[2555]: 2026-01-23 18:53:19.017 [INFO][5439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.4/32] ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Namespace="calico-system" Pod="goldmane-666569f655-22mgp" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" Jan 23 18:53:19.039952 containerd[2555]: 2026-01-23 18:53:19.017 [INFO][5439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13072bf81e1 ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Namespace="calico-system" Pod="goldmane-666569f655-22mgp" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" Jan 23 18:53:19.039952 containerd[2555]: 2026-01-23 18:53:19.021 [INFO][5439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Namespace="calico-system" Pod="goldmane-666569f655-22mgp" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" Jan 23 18:53:19.039952 containerd[2555]: 2026-01-23 18:53:19.021 [INFO][5439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Namespace="calico-system" Pod="goldmane-666569f655-22mgp" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", 
UID:"12936b13-6ad9-4c1b-a913-2f3039ac097a", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3", Pod:"goldmane-666569f655-22mgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali13072bf81e1", MAC:"0e:c1:80:08:d5:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:19.040163 containerd[2555]: 2026-01-23 18:53:19.036 [INFO][5439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" Namespace="calico-system" Pod="goldmane-666569f655-22mgp" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-goldmane--666569f655--22mgp-eth0" Jan 23 18:53:19.084000 audit[5491]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=5491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:19.084000 audit[5491]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffea13804c0 a2=0 a3=7ffea13804ac items=0 ppid=4095 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:19.087000 audit[5491]: NETFILTER_CFG table=nat:123 family=2 entries=12 op=nft_register_rule pid=5491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:19.087000 audit[5491]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea13804c0 a2=0 a3=0 items=0 ppid=4095 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:19.089581 containerd[2555]: time="2026-01-23T18:53:19.089544657Z" level=info msg="connecting to shim abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3" address="unix:///run/containerd/s/215cac33ff1ee8c0744dd5e12bb195279ea06f3fba19c3add7e068b5a593b246" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:19.110091 systemd-networkd[2150]: cali3468819bbfd: Gained IPv6LL Jan 23 18:53:19.116637 systemd[1]: Started cri-containerd-abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3.scope - libcontainer container abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3. 
Jan 23 18:53:19.125000 audit: BPF prog-id=214 op=LOAD Jan 23 18:53:19.126000 audit: BPF prog-id=215 op=LOAD Jan 23 18:53:19.126000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5494 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653433656465356531336335613638653637366235323734633332 Jan 23 18:53:19.126000 audit: BPF prog-id=215 op=UNLOAD Jan 23 18:53:19.126000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5494 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653433656465356531336335613638653637366235323734633332 Jan 23 18:53:19.126000 audit: BPF prog-id=216 op=LOAD Jan 23 18:53:19.126000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5494 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.126000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653433656465356531336335613638653637366235323734633332 Jan 23 18:53:19.126000 audit: BPF prog-id=217 op=LOAD Jan 23 18:53:19.126000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5494 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653433656465356531336335613638653637366235323734633332 Jan 23 18:53:19.127000 audit: BPF prog-id=217 op=UNLOAD Jan 23 18:53:19.127000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5494 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653433656465356531336335613638653637366235323734633332 Jan 23 18:53:19.127000 audit: BPF prog-id=216 op=UNLOAD Jan 23 18:53:19.127000 audit[5505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5494 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:53:19.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653433656465356531336335613638653637366235323734633332 Jan 23 18:53:19.127000 audit: BPF prog-id=218 op=LOAD Jan 23 18:53:19.127000 audit[5505]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5494 pid=5505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162653433656465356531336335613638653637366235323734633332 Jan 23 18:53:19.133902 systemd-networkd[2150]: calic76f875c940: Link UP Jan 23 18:53:19.134889 systemd-networkd[2150]: calic76f875c940: Gained carrier Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:18.951 [INFO][5443] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:18.960 [INFO][5443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0 calico-apiserver-8686dc9b89- calico-apiserver c2f1acaa-9237-4a56-b34a-eb28ae8b7529 796 0 2026-01-23 18:52:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8686dc9b89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-a-90f1f3b2aa calico-apiserver-8686dc9b89-kzk6x eth0 calico-apiserver 
[] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic76f875c940 [] [] }} ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-kzk6x" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:18.960 [INFO][5443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-kzk6x" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:18.993 [INFO][5468] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" HandleID="k8s-pod-network.a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:18.993 [INFO][5468] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" HandleID="k8s-pod-network.a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-a-90f1f3b2aa", "pod":"calico-apiserver-8686dc9b89-kzk6x", "timestamp":"2026-01-23 18:53:18.993563953 +0000 UTC"}, Hostname:"ci-4547.1.0-a-90f1f3b2aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 
23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:18.993 [INFO][5468] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.015 [INFO][5468] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.015 [INFO][5468] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-90f1f3b2aa' Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.097 [INFO][5468] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.101 [INFO][5468] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.108 [INFO][5468] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.111 [INFO][5468] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.113 [INFO][5468] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.113 [INFO][5468] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.115 [INFO][5468] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388 Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.118 [INFO][5468] ipam/ipam.go 1246: Writing block in 
order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.128 [INFO][5468] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.5/26] block=192.168.114.0/26 handle="k8s-pod-network.a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.129 [INFO][5468] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.5/26] handle="k8s-pod-network.a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:19.154080 containerd[2555]: 2026-01-23 18:53:19.130 [INFO][5468] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:53:19.154638 containerd[2555]: 2026-01-23 18:53:19.130 [INFO][5468] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.5/26] IPv6=[] ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" HandleID="k8s-pod-network.a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" Jan 23 18:53:19.154638 containerd[2555]: 2026-01-23 18:53:19.131 [INFO][5443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-kzk6x" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0", GenerateName:"calico-apiserver-8686dc9b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2f1acaa-9237-4a56-b34a-eb28ae8b7529", 
ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8686dc9b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"", Pod:"calico-apiserver-8686dc9b89-kzk6x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic76f875c940", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:19.154638 containerd[2555]: 2026-01-23 18:53:19.132 [INFO][5443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.5/32] ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-kzk6x" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" Jan 23 18:53:19.154638 containerd[2555]: 2026-01-23 18:53:19.132 [INFO][5443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic76f875c940 ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-kzk6x" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" Jan 23 18:53:19.154638 
containerd[2555]: 2026-01-23 18:53:19.135 [INFO][5443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-kzk6x" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" Jan 23 18:53:19.154811 containerd[2555]: 2026-01-23 18:53:19.137 [INFO][5443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-kzk6x" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0", GenerateName:"calico-apiserver-8686dc9b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2f1acaa-9237-4a56-b34a-eb28ae8b7529", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8686dc9b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388", Pod:"calico-apiserver-8686dc9b89-kzk6x", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic76f875c940", MAC:"b2:6a:18:51:a4:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:19.154811 containerd[2555]: 2026-01-23 18:53:19.151 [INFO][5443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" Namespace="calico-apiserver" Pod="calico-apiserver-8686dc9b89-kzk6x" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--apiserver--8686dc9b89--kzk6x-eth0" Jan 23 18:53:19.174286 containerd[2555]: time="2026-01-23T18:53:19.174258949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-22mgp,Uid:12936b13-6ad9-4c1b-a913-2f3039ac097a,Namespace:calico-system,Attempt:0,} returns sandbox id \"abe43ede5e13c5a68e676b5274c32c3fa8ea1b07f1db8d2a5a0a9fda5b2cf6f3\"" Jan 23 18:53:19.175328 containerd[2555]: time="2026-01-23T18:53:19.175255723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:53:19.197663 containerd[2555]: time="2026-01-23T18:53:19.197625387Z" level=info msg="connecting to shim a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388" address="unix:///run/containerd/s/4eeb09d85a8c272283d7794a49c29d58b02e9d7e8cac852dc46fad6a23ada401" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:19.216657 systemd[1]: Started cri-containerd-a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388.scope - libcontainer container a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388. 
Jan 23 18:53:19.226000 audit: BPF prog-id=219 op=LOAD Jan 23 18:53:19.227000 audit: BPF prog-id=220 op=LOAD Jan 23 18:53:19.227000 audit[5556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5545 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646465613763313931326364623662626461386538313032326261 Jan 23 18:53:19.227000 audit: BPF prog-id=220 op=UNLOAD Jan 23 18:53:19.227000 audit[5556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5545 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646465613763313931326364623662626461386538313032326261 Jan 23 18:53:19.227000 audit: BPF prog-id=221 op=LOAD Jan 23 18:53:19.227000 audit[5556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5545 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.227000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646465613763313931326364623662626461386538313032326261 Jan 23 18:53:19.227000 audit: BPF prog-id=222 op=LOAD Jan 23 18:53:19.227000 audit[5556]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5545 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646465613763313931326364623662626461386538313032326261 Jan 23 18:53:19.227000 audit: BPF prog-id=222 op=UNLOAD Jan 23 18:53:19.227000 audit[5556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5545 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646465613763313931326364623662626461386538313032326261 Jan 23 18:53:19.227000 audit: BPF prog-id=221 op=UNLOAD Jan 23 18:53:19.227000 audit[5556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5545 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:53:19.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646465613763313931326364623662626461386538313032326261 Jan 23 18:53:19.227000 audit: BPF prog-id=223 op=LOAD Jan 23 18:53:19.227000 audit[5556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5545 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:19.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130646465613763313931326364623662626461386538313032326261 Jan 23 18:53:19.260011 containerd[2555]: time="2026-01-23T18:53:19.259943416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8686dc9b89-kzk6x,Uid:c2f1acaa-9237-4a56-b34a-eb28ae8b7529,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a0ddea7c1912cdb6bbda8e81022baad3d91152486053dcc9f0f0dec7b5a64388\"" Jan 23 18:53:19.301650 systemd-networkd[2150]: cali180a11a20c3: Gained IPv6LL Jan 23 18:53:19.459406 containerd[2555]: time="2026-01-23T18:53:19.459371096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:19.462707 containerd[2555]: time="2026-01-23T18:53:19.462671154Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:53:19.462841 containerd[2555]: time="2026-01-23T18:53:19.462679014Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:19.462893 kubelet[3990]: E0123 18:53:19.462862 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:53:19.463143 kubelet[3990]: E0123 18:53:19.462906 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:53:19.463315 containerd[2555]: time="2026-01-23T18:53:19.463293314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:53:19.463551 kubelet[3990]: E0123 18:53:19.463501 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82cb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-22mgp_calico-system(12936b13-6ad9-4c1b-a913-2f3039ac097a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:19.464782 kubelet[3990]: E0123 18:53:19.464726 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:53:19.734951 containerd[2555]: time="2026-01-23T18:53:19.734834875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:19.737342 containerd[2555]: time="2026-01-23T18:53:19.737304891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:53:19.737425 containerd[2555]: time="2026-01-23T18:53:19.737365634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:19.737542 kubelet[3990]: E0123 18:53:19.737472 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:19.737599 kubelet[3990]: E0123 18:53:19.737549 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:19.737814 kubelet[3990]: E0123 18:53:19.737700 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x9br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8686dc9b89-kzk6x_calico-apiserver(c2f1acaa-9237-4a56-b34a-eb28ae8b7529): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:19.739013 kubelet[3990]: E0123 18:53:19.738977 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:53:19.892172 containerd[2555]: time="2026-01-23T18:53:19.891697391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6658c89489-trg8d,Uid:0fe1ccdb-f11d-478d-b8c5-50e7a678ae44,Namespace:calico-system,Attempt:0,}" Jan 23 18:53:19.892490 containerd[2555]: time="2026-01-23T18:53:19.892440223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57fbd,Uid:b95b576c-0021-4070-9f4b-cf851ec9d8b5,Namespace:kube-system,Attempt:0,}" Jan 23 18:53:19.892594 containerd[2555]: time="2026-01-23T18:53:19.892578316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2zwxx,Uid:065ad94c-6bc1-4cb8-8e5f-8e21ce855f36,Namespace:kube-system,Attempt:0,}" Jan 23 18:53:20.045361 kubelet[3990]: E0123 18:53:20.043701 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:53:20.045784 
systemd-networkd[2150]: cali5fceb2c606b: Link UP Jan 23 18:53:20.045990 systemd-networkd[2150]: cali5fceb2c606b: Gained carrier Jan 23 18:53:20.050457 kubelet[3990]: E0123 18:53:20.050406 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:53:20.050677 kubelet[3990]: E0123 18:53:20.050621 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:53:20.052827 kubelet[3990]: E0123 18:53:20.052763 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:19.944 [INFO][5604] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:19.963 [INFO][5604] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0 coredns-668d6bf9bc- kube-system b95b576c-0021-4070-9f4b-cf851ec9d8b5 788 0 2026-01-23 18:52:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.1.0-a-90f1f3b2aa coredns-668d6bf9bc-57fbd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5fceb2c606b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Namespace="kube-system" Pod="coredns-668d6bf9bc-57fbd" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:19.963 [INFO][5604] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Namespace="kube-system" Pod="coredns-668d6bf9bc-57fbd" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:19.994 [INFO][5644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" 
HandleID="k8s-pod-network.2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:19.994 [INFO][5644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" HandleID="k8s-pod-network.2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.1.0-a-90f1f3b2aa", "pod":"coredns-668d6bf9bc-57fbd", "timestamp":"2026-01-23 18:53:19.994570099 +0000 UTC"}, Hostname:"ci-4547.1.0-a-90f1f3b2aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:19.994 [INFO][5644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:19.994 [INFO][5644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:19.994 [INFO][5644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-90f1f3b2aa' Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.001 [INFO][5644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.007 [INFO][5644] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.014 [INFO][5644] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.016 [INFO][5644] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.019 [INFO][5644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.019 [INFO][5644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.020 [INFO][5644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809 Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.025 [INFO][5644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.033 [INFO][5644] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.114.6/26] block=192.168.114.0/26 handle="k8s-pod-network.2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.033 [INFO][5644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.6/26] handle="k8s-pod-network.2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.070907 containerd[2555]: 2026-01-23 18:53:20.033 [INFO][5644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:53:20.072382 containerd[2555]: 2026-01-23 18:53:20.033 [INFO][5644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.6/26] IPv6=[] ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" HandleID="k8s-pod-network.2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" Jan 23 18:53:20.072382 containerd[2555]: 2026-01-23 18:53:20.037 [INFO][5604] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Namespace="kube-system" Pod="coredns-668d6bf9bc-57fbd" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b95b576c-0021-4070-9f4b-cf851ec9d8b5", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"", Pod:"coredns-668d6bf9bc-57fbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fceb2c606b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:20.072382 containerd[2555]: 2026-01-23 18:53:20.037 [INFO][5604] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.6/32] ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Namespace="kube-system" Pod="coredns-668d6bf9bc-57fbd" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" Jan 23 18:53:20.072382 containerd[2555]: 2026-01-23 18:53:20.037 [INFO][5604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fceb2c606b ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Namespace="kube-system" Pod="coredns-668d6bf9bc-57fbd" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" Jan 23 18:53:20.072382 containerd[2555]: 2026-01-23 18:53:20.045 [INFO][5604] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Namespace="kube-system" Pod="coredns-668d6bf9bc-57fbd" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" Jan 23 18:53:20.072996 containerd[2555]: 2026-01-23 18:53:20.049 [INFO][5604] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Namespace="kube-system" Pod="coredns-668d6bf9bc-57fbd" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b95b576c-0021-4070-9f4b-cf851ec9d8b5", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809", Pod:"coredns-668d6bf9bc-57fbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fceb2c606b", 
MAC:"5e:ab:1f:95:bb:54", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:20.072996 containerd[2555]: 2026-01-23 18:53:20.068 [INFO][5604] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" Namespace="kube-system" Pod="coredns-668d6bf9bc-57fbd" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--57fbd-eth0" Jan 23 18:53:20.080000 audit[5671]: NETFILTER_CFG table=filter:124 family=2 entries=22 op=nft_register_rule pid=5671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:20.080000 audit[5671]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcecfb0cd0 a2=0 a3=7ffcecfb0cbc items=0 ppid=4095 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:20.085000 audit[5671]: NETFILTER_CFG table=nat:125 family=2 entries=12 op=nft_register_rule pid=5671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:20.085000 audit[5671]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcecfb0cd0 a2=0 a3=0 items=0 ppid=4095 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.085000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:20.131623 containerd[2555]: time="2026-01-23T18:53:20.131584508Z" level=info msg="connecting to shim 2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809" address="unix:///run/containerd/s/b205ccc69ed02be089f2e114cd43bb33eac6e5354f5776767b1bcfb8d1247afd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:20.129000 audit[5680]: NETFILTER_CFG table=filter:126 family=2 entries=22 op=nft_register_rule pid=5680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:20.129000 audit[5680]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc48fb9f20 a2=0 a3=7ffc48fb9f0c items=0 ppid=4095 pid=5680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:20.134000 audit[5680]: NETFILTER_CFG table=nat:127 family=2 entries=12 op=nft_register_rule pid=5680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:20.134000 audit[5680]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc48fb9f20 a2=0 a3=0 items=0 ppid=4095 pid=5680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:20.173299 
systemd[1]: Started cri-containerd-2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809.scope - libcontainer container 2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809. Jan 23 18:53:20.185303 systemd-networkd[2150]: cali5d55cf4b825: Link UP Jan 23 18:53:20.187115 systemd-networkd[2150]: cali5d55cf4b825: Gained carrier Jan 23 18:53:20.189000 audit: BPF prog-id=224 op=LOAD Jan 23 18:53:20.190000 audit: BPF prog-id=225 op=LOAD Jan 23 18:53:20.190000 audit[5694]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5682 pid=5694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303563656239313533613335393762306665323566613866653736 Jan 23 18:53:20.195000 audit: BPF prog-id=225 op=UNLOAD Jan 23 18:53:20.195000 audit[5694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5682 pid=5694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303563656239313533613335393762306665323566613866653736 Jan 23 18:53:20.195000 audit: BPF prog-id=226 op=LOAD Jan 23 18:53:20.195000 audit[5694]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5682 pid=5694 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303563656239313533613335393762306665323566613866653736 Jan 23 18:53:20.195000 audit: BPF prog-id=227 op=LOAD Jan 23 18:53:20.195000 audit[5694]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5682 pid=5694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303563656239313533613335393762306665323566613866653736 Jan 23 18:53:20.195000 audit: BPF prog-id=227 op=UNLOAD Jan 23 18:53:20.195000 audit[5694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5682 pid=5694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303563656239313533613335393762306665323566613866653736 Jan 23 18:53:20.195000 audit: BPF prog-id=226 op=UNLOAD Jan 23 18:53:20.195000 audit[5694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5682 pid=5694 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303563656239313533613335393762306665323566613866653736 Jan 23 18:53:20.195000 audit: BPF prog-id=228 op=LOAD Jan 23 18:53:20.195000 audit[5694]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5682 pid=5694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303563656239313533613335393762306665323566613866653736 Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:19.949 [INFO][5626] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:19.962 [INFO][5626] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0 coredns-668d6bf9bc- kube-system 065ad94c-6bc1-4cb8-8e5f-8e21ce855f36 797 0 2026-01-23 18:52:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.1.0-a-90f1f3b2aa coredns-668d6bf9bc-2zwxx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5d55cf4b825 [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2zwxx" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:19.962 [INFO][5626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2zwxx" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.015 [INFO][5646] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" HandleID="k8s-pod-network.35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.015 [INFO][5646] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" HandleID="k8s-pod-network.35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.1.0-a-90f1f3b2aa", "pod":"coredns-668d6bf9bc-2zwxx", "timestamp":"2026-01-23 18:53:20.015420431 +0000 UTC"}, Hostname:"ci-4547.1.0-a-90f1f3b2aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.015 [INFO][5646] ipam/ipam_plugin.go 377: About to acquire host-wide 
IPAM lock. Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.033 [INFO][5646] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.033 [INFO][5646] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-90f1f3b2aa' Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.105 [INFO][5646] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.130 [INFO][5646] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.137 [INFO][5646] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.139 [INFO][5646] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.146 [INFO][5646] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.146 [INFO][5646] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.148 [INFO][5646] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.153 [INFO][5646] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 
handle="k8s-pod-network.35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.165 [INFO][5646] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.7/26] block=192.168.114.0/26 handle="k8s-pod-network.35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.165 [INFO][5646] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.7/26] handle="k8s-pod-network.35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.206993 containerd[2555]: 2026-01-23 18:53:20.165 [INFO][5646] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:53:20.208189 containerd[2555]: 2026-01-23 18:53:20.165 [INFO][5646] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.7/26] IPv6=[] ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" HandleID="k8s-pod-network.35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" Jan 23 18:53:20.208189 containerd[2555]: 2026-01-23 18:53:20.176 [INFO][5626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2zwxx" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"065ad94c-6bc1-4cb8-8e5f-8e21ce855f36", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 40, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"", Pod:"coredns-668d6bf9bc-2zwxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d55cf4b825", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:20.208189 containerd[2555]: 2026-01-23 18:53:20.177 [INFO][5626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.7/32] ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2zwxx" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" Jan 23 18:53:20.208189 containerd[2555]: 2026-01-23 18:53:20.178 [INFO][5626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d55cf4b825 ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-2zwxx" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" Jan 23 18:53:20.208189 containerd[2555]: 2026-01-23 18:53:20.188 [INFO][5626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2zwxx" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" Jan 23 18:53:20.208386 containerd[2555]: 2026-01-23 18:53:20.189 [INFO][5626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2zwxx" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"065ad94c-6bc1-4cb8-8e5f-8e21ce855f36", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e", Pod:"coredns-668d6bf9bc-2zwxx", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d55cf4b825", MAC:"ca:97:a9:ed:a7:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:20.208386 containerd[2555]: 2026-01-23 18:53:20.203 [INFO][5626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2zwxx" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-coredns--668d6bf9bc--2zwxx-eth0" Jan 23 18:53:20.263031 containerd[2555]: time="2026-01-23T18:53:20.262967241Z" level=info msg="connecting to shim 35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e" address="unix:///run/containerd/s/295f5aee1f2c3d6a8c4332096ed189ac81c364ebad0a234a78909fe093f255b2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:20.268565 containerd[2555]: time="2026-01-23T18:53:20.268522564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57fbd,Uid:b95b576c-0021-4070-9f4b-cf851ec9d8b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809\"" Jan 23 18:53:20.276649 containerd[2555]: time="2026-01-23T18:53:20.275971758Z" level=info msg="CreateContainer within sandbox \"2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:53:20.276802 systemd-networkd[2150]: cali28b1597d553: Link UP Jan 23 18:53:20.277354 systemd-networkd[2150]: cali28b1597d553: Gained carrier Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:19.954 [INFO][5615] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:19.966 [INFO][5615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0 calico-kube-controllers-6658c89489- calico-system 0fe1ccdb-f11d-478d-b8c5-50e7a678ae44 793 0 2026-01-23 18:52:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6658c89489 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.1.0-a-90f1f3b2aa calico-kube-controllers-6658c89489-trg8d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali28b1597d553 [] [] }} ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Namespace="calico-system" Pod="calico-kube-controllers-6658c89489-trg8d" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:19.966 [INFO][5615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Namespace="calico-system" Pod="calico-kube-controllers-6658c89489-trg8d" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.017 [INFO][5651] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" HandleID="k8s-pod-network.57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.019 [INFO][5651] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" HandleID="k8s-pod-network.57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-90f1f3b2aa", "pod":"calico-kube-controllers-6658c89489-trg8d", "timestamp":"2026-01-23 18:53:20.017445714 +0000 UTC"}, Hostname:"ci-4547.1.0-a-90f1f3b2aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.019 [INFO][5651] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.166 [INFO][5651] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.166 [INFO][5651] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-90f1f3b2aa' Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.204 [INFO][5651] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.228 [INFO][5651] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.235 [INFO][5651] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.237 [INFO][5651] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.244 [INFO][5651] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.245 [INFO][5651] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.250 [INFO][5651] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8 Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.256 [INFO][5651] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.268 [INFO][5651] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.114.8/26] block=192.168.114.0/26 handle="k8s-pod-network.57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.268 [INFO][5651] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.8/26] handle="k8s-pod-network.57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" host="ci-4547.1.0-a-90f1f3b2aa" Jan 23 18:53:20.296136 containerd[2555]: 2026-01-23 18:53:20.268 [INFO][5651] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:53:20.297522 containerd[2555]: 2026-01-23 18:53:20.268 [INFO][5651] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.8/26] IPv6=[] ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" HandleID="k8s-pod-network.57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Workload="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" Jan 23 18:53:20.297522 containerd[2555]: 2026-01-23 18:53:20.270 [INFO][5615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Namespace="calico-system" Pod="calico-kube-controllers-6658c89489-trg8d" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0", GenerateName:"calico-kube-controllers-6658c89489-", Namespace:"calico-system", SelfLink:"", UID:"0fe1ccdb-f11d-478d-b8c5-50e7a678ae44", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6658c89489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"", Pod:"calico-kube-controllers-6658c89489-trg8d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28b1597d553", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:20.297522 containerd[2555]: 2026-01-23 18:53:20.270 [INFO][5615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.8/32] ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Namespace="calico-system" Pod="calico-kube-controllers-6658c89489-trg8d" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" Jan 23 18:53:20.297522 containerd[2555]: 2026-01-23 18:53:20.270 [INFO][5615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28b1597d553 ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Namespace="calico-system" Pod="calico-kube-controllers-6658c89489-trg8d" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" Jan 23 18:53:20.297522 containerd[2555]: 2026-01-23 18:53:20.277 [INFO][5615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Namespace="calico-system" Pod="calico-kube-controllers-6658c89489-trg8d" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" Jan 23 18:53:20.297696 containerd[2555]: 2026-01-23 18:53:20.279 [INFO][5615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Namespace="calico-system" Pod="calico-kube-controllers-6658c89489-trg8d" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0", GenerateName:"calico-kube-controllers-6658c89489-", Namespace:"calico-system", SelfLink:"", UID:"0fe1ccdb-f11d-478d-b8c5-50e7a678ae44", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 52, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6658c89489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-90f1f3b2aa", ContainerID:"57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8", Pod:"calico-kube-controllers-6658c89489-trg8d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.8/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28b1597d553", MAC:"6a:34:d4:19:69:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:53:20.297696 containerd[2555]: 2026-01-23 18:53:20.292 [INFO][5615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" Namespace="calico-system" Pod="calico-kube-controllers-6658c89489-trg8d" WorkloadEndpoint="ci--4547.1.0--a--90f1f3b2aa-k8s-calico--kube--controllers--6658c89489--trg8d-eth0" Jan 23 18:53:20.300624 containerd[2555]: time="2026-01-23T18:53:20.299428161Z" level=info msg="Container 9571f8ceba79050a3ffe2ed0007732672c341125a35273d5fb5bddca8dcca981: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:20.302852 systemd[1]: Started cri-containerd-35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e.scope - libcontainer container 35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e. 
Jan 23 18:53:20.310000 audit: BPF prog-id=229 op=LOAD Jan 23 18:53:20.311000 audit: BPF prog-id=230 op=LOAD Jan 23 18:53:20.311000 audit[5747]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5734 pid=5747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335613037366431613464373061313463656632383666653836313461 Jan 23 18:53:20.311000 audit: BPF prog-id=230 op=UNLOAD Jan 23 18:53:20.311000 audit[5747]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5734 pid=5747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335613037366431613464373061313463656632383666653836313461 Jan 23 18:53:20.311000 audit: BPF prog-id=231 op=LOAD Jan 23 18:53:20.311000 audit[5747]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5734 pid=5747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.311000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335613037366431613464373061313463656632383666653836313461 Jan 23 18:53:20.312000 audit: BPF prog-id=232 op=LOAD Jan 23 18:53:20.312000 audit[5747]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5734 pid=5747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335613037366431613464373061313463656632383666653836313461 Jan 23 18:53:20.312000 audit: BPF prog-id=232 op=UNLOAD Jan 23 18:53:20.312000 audit[5747]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5734 pid=5747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335613037366431613464373061313463656632383666653836313461 Jan 23 18:53:20.312000 audit: BPF prog-id=231 op=UNLOAD Jan 23 18:53:20.312000 audit[5747]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5734 pid=5747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:53:20.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335613037366431613464373061313463656632383666653836313461 Jan 23 18:53:20.312000 audit: BPF prog-id=233 op=LOAD Jan 23 18:53:20.312000 audit[5747]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5734 pid=5747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335613037366431613464373061313463656632383666653836313461 Jan 23 18:53:20.318865 containerd[2555]: time="2026-01-23T18:53:20.318815995Z" level=info msg="CreateContainer within sandbox \"2f05ceb9153a3597b0fe25fa8fe76d57cde79b2f025442d6a6037efed0169809\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9571f8ceba79050a3ffe2ed0007732672c341125a35273d5fb5bddca8dcca981\"" Jan 23 18:53:20.320007 containerd[2555]: time="2026-01-23T18:53:20.319602411Z" level=info msg="StartContainer for \"9571f8ceba79050a3ffe2ed0007732672c341125a35273d5fb5bddca8dcca981\"" Jan 23 18:53:20.320344 containerd[2555]: time="2026-01-23T18:53:20.320286931Z" level=info msg="connecting to shim 9571f8ceba79050a3ffe2ed0007732672c341125a35273d5fb5bddca8dcca981" address="unix:///run/containerd/s/b205ccc69ed02be089f2e114cd43bb33eac6e5354f5776767b1bcfb8d1247afd" protocol=ttrpc version=3 Jan 23 18:53:20.339638 systemd[1]: Started cri-containerd-9571f8ceba79050a3ffe2ed0007732672c341125a35273d5fb5bddca8dcca981.scope - libcontainer container 
9571f8ceba79050a3ffe2ed0007732672c341125a35273d5fb5bddca8dcca981. Jan 23 18:53:20.357166 containerd[2555]: time="2026-01-23T18:53:20.357108766Z" level=info msg="connecting to shim 57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8" address="unix:///run/containerd/s/8a2f46b4c26928f97207073812ceb5e29166afd14cd0649f8d0dacc9d120c42d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:20.364000 audit: BPF prog-id=234 op=LOAD Jan 23 18:53:20.365000 audit: BPF prog-id=235 op=LOAD Jan 23 18:53:20.365000 audit[5771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5682 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166386365626137393035306133666665326564303030373733 Jan 23 18:53:20.365000 audit: BPF prog-id=235 op=UNLOAD Jan 23 18:53:20.365000 audit[5771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5682 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166386365626137393035306133666665326564303030373733 Jan 23 18:53:20.365000 audit: BPF prog-id=236 op=LOAD Jan 23 18:53:20.365000 audit[5771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5682 
pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166386365626137393035306133666665326564303030373733 Jan 23 18:53:20.366000 audit: BPF prog-id=237 op=LOAD Jan 23 18:53:20.366000 audit[5771]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5682 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166386365626137393035306133666665326564303030373733 Jan 23 18:53:20.366000 audit: BPF prog-id=237 op=UNLOAD Jan 23 18:53:20.366000 audit[5771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5682 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166386365626137393035306133666665326564303030373733 Jan 23 18:53:20.366000 audit: BPF prog-id=236 op=UNLOAD Jan 23 18:53:20.366000 audit[5771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=5682 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166386365626137393035306133666665326564303030373733 Jan 23 18:53:20.367000 audit: BPF prog-id=238 op=LOAD Jan 23 18:53:20.367000 audit[5771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5682 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373166386365626137393035306133666665326564303030373733 Jan 23 18:53:20.384852 containerd[2555]: time="2026-01-23T18:53:20.384776416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2zwxx,Uid:065ad94c-6bc1-4cb8-8e5f-8e21ce855f36,Namespace:kube-system,Attempt:0,} returns sandbox id \"35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e\"" Jan 23 18:53:20.390692 containerd[2555]: time="2026-01-23T18:53:20.390669210Z" level=info msg="CreateContainer within sandbox \"35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:53:20.414697 systemd[1]: Started cri-containerd-57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8.scope - libcontainer container 
57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8. Jan 23 18:53:20.417723 containerd[2555]: time="2026-01-23T18:53:20.417696028Z" level=info msg="StartContainer for \"9571f8ceba79050a3ffe2ed0007732672c341125a35273d5fb5bddca8dcca981\" returns successfully" Jan 23 18:53:20.422527 containerd[2555]: time="2026-01-23T18:53:20.421613483Z" level=info msg="Container fd110a254b5bb959572a9c285aa50f70351b803c4a21216520f798645dbc090e: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:20.433632 containerd[2555]: time="2026-01-23T18:53:20.433541651Z" level=info msg="CreateContainer within sandbox \"35a076d1a4d70a14cef286fe8614a44cf1605207e5879079c7a39c733ef6642e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd110a254b5bb959572a9c285aa50f70351b803c4a21216520f798645dbc090e\"" Jan 23 18:53:20.435522 containerd[2555]: time="2026-01-23T18:53:20.434061915Z" level=info msg="StartContainer for \"fd110a254b5bb959572a9c285aa50f70351b803c4a21216520f798645dbc090e\"" Jan 23 18:53:20.436094 containerd[2555]: time="2026-01-23T18:53:20.436057346Z" level=info msg="connecting to shim fd110a254b5bb959572a9c285aa50f70351b803c4a21216520f798645dbc090e" address="unix:///run/containerd/s/295f5aee1f2c3d6a8c4332096ed189ac81c364ebad0a234a78909fe093f255b2" protocol=ttrpc version=3 Jan 23 18:53:20.453574 systemd-networkd[2150]: cali13072bf81e1: Gained IPv6LL Jan 23 18:53:20.454890 systemd[1]: Started cri-containerd-fd110a254b5bb959572a9c285aa50f70351b803c4a21216520f798645dbc090e.scope - libcontainer container fd110a254b5bb959572a9c285aa50f70351b803c4a21216520f798645dbc090e. 
Jan 23 18:53:20.465000 audit: BPF prog-id=239 op=LOAD Jan 23 18:53:20.466000 audit: BPF prog-id=240 op=LOAD Jan 23 18:53:20.466000 audit[5847]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5734 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313130613235346235626239353935373261396332383561613530 Jan 23 18:53:20.466000 audit: BPF prog-id=240 op=UNLOAD Jan 23 18:53:20.466000 audit[5847]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5734 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313130613235346235626239353935373261396332383561613530 Jan 23 18:53:20.467000 audit: BPF prog-id=241 op=LOAD Jan 23 18:53:20.467000 audit[5847]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5734 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.467000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313130613235346235626239353935373261396332383561613530 Jan 23 18:53:20.467000 audit: BPF prog-id=242 op=LOAD Jan 23 18:53:20.467000 audit[5847]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5734 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313130613235346235626239353935373261396332383561613530 Jan 23 18:53:20.467000 audit: BPF prog-id=242 op=UNLOAD Jan 23 18:53:20.467000 audit[5847]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5734 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313130613235346235626239353935373261396332383561613530 Jan 23 18:53:20.467000 audit: BPF prog-id=241 op=UNLOAD Jan 23 18:53:20.467000 audit[5847]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5734 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:53:20.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313130613235346235626239353935373261396332383561613530 Jan 23 18:53:20.467000 audit: BPF prog-id=243 op=LOAD Jan 23 18:53:20.467000 audit[5847]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5734 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313130613235346235626239353935373261396332383561613530 Jan 23 18:53:20.500250 containerd[2555]: time="2026-01-23T18:53:20.500219894Z" level=info msg="StartContainer for \"fd110a254b5bb959572a9c285aa50f70351b803c4a21216520f798645dbc090e\" returns successfully" Jan 23 18:53:20.517562 systemd-networkd[2150]: calic76f875c940: Gained IPv6LL Jan 23 18:53:20.530000 audit: BPF prog-id=244 op=LOAD Jan 23 18:53:20.530000 audit: BPF prog-id=245 op=LOAD Jan 23 18:53:20.530000 audit[5819]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=5801 pid=5819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.530000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633139396439336235313339366261386337316133353036373062 Jan 23 18:53:20.530000 audit: BPF prog-id=245 op=UNLOAD Jan 23 18:53:20.530000 audit[5819]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633139396439336235313339366261386337316133353036373062 Jan 23 18:53:20.530000 audit: BPF prog-id=246 op=LOAD Jan 23 18:53:20.530000 audit[5819]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=5801 pid=5819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633139396439336235313339366261386337316133353036373062 Jan 23 18:53:20.530000 audit: BPF prog-id=247 op=LOAD Jan 23 18:53:20.530000 audit[5819]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=5801 pid=5819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 18:53:20.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633139396439336235313339366261386337316133353036373062 Jan 23 18:53:20.530000 audit: BPF prog-id=247 op=UNLOAD Jan 23 18:53:20.530000 audit[5819]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633139396439336235313339366261386337316133353036373062 Jan 23 18:53:20.531000 audit: BPF prog-id=246 op=UNLOAD Jan 23 18:53:20.531000 audit[5819]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633139396439336235313339366261386337316133353036373062 Jan 23 18:53:20.531000 audit: BPF prog-id=248 op=LOAD Jan 23 18:53:20.531000 audit[5819]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=5801 pid=5819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:20.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633139396439336235313339366261386337316133353036373062 Jan 23 18:53:20.595731 containerd[2555]: time="2026-01-23T18:53:20.595655867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6658c89489-trg8d,Uid:0fe1ccdb-f11d-478d-b8c5-50e7a678ae44,Namespace:calico-system,Attempt:0,} returns sandbox id \"57c199d93b51396ba8c71a350670be5800783ee65b704287f13131cb9eaad3b8\"" Jan 23 18:53:20.598440 containerd[2555]: time="2026-01-23T18:53:20.598417367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:53:20.848681 containerd[2555]: time="2026-01-23T18:53:20.848503022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:20.850866 containerd[2555]: time="2026-01-23T18:53:20.850835948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:53:20.850942 containerd[2555]: time="2026-01-23T18:53:20.850907045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:20.851083 kubelet[3990]: E0123 18:53:20.851038 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:53:20.851342 kubelet[3990]: 
E0123 18:53:20.851095 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:53:20.851637 kubelet[3990]: E0123 18:53:20.851590 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndlkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6658c89489-trg8d_calico-system(0fe1ccdb-f11d-478d-b8c5-50e7a678ae44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:20.852781 kubelet[3990]: E0123 18:53:20.852750 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:53:21.052266 kubelet[3990]: 
E0123 18:53:21.052173 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:53:21.058544 kubelet[3990]: E0123 18:53:21.058280 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:53:21.059246 kubelet[3990]: E0123 18:53:21.059169 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:53:21.094519 kubelet[3990]: I0123 18:53:21.094445 3990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-57fbd" podStartSLOduration=41.094431864 podStartE2EDuration="41.094431864s" 
podCreationTimestamp="2026-01-23 18:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:53:21.0941917 +0000 UTC m=+47.297827106" watchObservedRunningTime="2026-01-23 18:53:21.094431864 +0000 UTC m=+47.298067266" Jan 23 18:53:21.111000 audit[5908]: NETFILTER_CFG table=filter:128 family=2 entries=19 op=nft_register_rule pid=5908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:21.111000 audit[5908]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe2de669c0 a2=0 a3=7ffe2de669ac items=0 ppid=4095 pid=5908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:21.111000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:21.116000 audit[5908]: NETFILTER_CFG table=nat:129 family=2 entries=33 op=nft_register_chain pid=5908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:21.116000 audit[5908]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffe2de669c0 a2=0 a3=7ffe2de669ac items=0 ppid=4095 pid=5908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:21.116000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:21.285591 systemd-networkd[2150]: cali5fceb2c606b: Gained IPv6LL Jan 23 18:53:21.797597 systemd-networkd[2150]: cali5d55cf4b825: Gained IPv6LL Jan 23 18:53:21.797859 systemd-networkd[2150]: cali28b1597d553: Gained IPv6LL Jan 23 18:53:22.065022 kubelet[3990]: E0123 18:53:22.064909 3990 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:53:22.084109 kubelet[3990]: I0123 18:53:22.084052 3990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2zwxx" podStartSLOduration=42.084037086 podStartE2EDuration="42.084037086s" podCreationTimestamp="2026-01-23 18:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:53:21.125217089 +0000 UTC m=+47.328852493" watchObservedRunningTime="2026-01-23 18:53:22.084037086 +0000 UTC m=+48.287672494" Jan 23 18:53:22.217000 audit[5931]: NETFILTER_CFG table=filter:130 family=2 entries=16 op=nft_register_rule pid=5931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:22.217000 audit[5931]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff8e2afa40 a2=0 a3=7fff8e2afa2c items=0 ppid=4095 pid=5931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:22.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:22.267000 audit[5931]: NETFILTER_CFG table=nat:131 family=2 entries=54 op=nft_register_chain pid=5931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 
18:53:22.267000 audit[5931]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7fff8e2afa40 a2=0 a3=7fff8e2afa2c items=0 ppid=4095 pid=5931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:22.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:22.375647 kubelet[3990]: I0123 18:53:22.375119 3990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:53:23.287000 audit[5970]: NETFILTER_CFG table=filter:132 family=2 entries=15 op=nft_register_rule pid=5970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:23.290544 kernel: kauditd_printk_skb: 218 callbacks suppressed Jan 23 18:53:23.290714 kernel: audit: type=1325 audit(1769194403.287:693): table=filter:132 family=2 entries=15 op=nft_register_rule pid=5970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:23.287000 audit[5970]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc0062830 a2=0 a3=7fffc006281c items=0 ppid=4095 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.300607 kernel: audit: type=1300 audit(1769194403.287:693): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc0062830 a2=0 a3=7fffc006281c items=0 ppid=4095 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.287000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:23.313867 kernel: audit: type=1327 audit(1769194403.287:693): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:23.295000 audit[5970]: NETFILTER_CFG table=nat:133 family=2 entries=25 op=nft_register_chain pid=5970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:23.317828 kernel: audit: type=1325 audit(1769194403.295:694): table=nat:133 family=2 entries=25 op=nft_register_chain pid=5970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:53:23.328753 kernel: audit: type=1300 audit(1769194403.295:694): arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7fffc0062830 a2=0 a3=7fffc006281c items=0 ppid=4095 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.295000 audit[5970]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7fffc0062830 a2=0 a3=7fffc006281c items=0 ppid=4095 pid=5970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.333494 kernel: audit: type=1327 audit(1769194403.295:694): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:23.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:53:23.404000 audit: BPF prog-id=249 op=LOAD Jan 23 18:53:23.404000 audit[5987]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdea5511e0 a2=98 a3=1fffffffffffffff items=0 ppid=5971 pid=5987 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.412526 kernel: audit: type=1334 audit(1769194403.404:695): prog-id=249 op=LOAD Jan 23 18:53:23.412585 kernel: audit: type=1300 audit(1769194403.404:695): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdea5511e0 a2=98 a3=1fffffffffffffff items=0 ppid=5971 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.404000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:53:23.417552 kernel: audit: type=1327 audit(1769194403.404:695): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:53:23.421293 kernel: audit: type=1334 audit(1769194403.404:696): prog-id=249 op=UNLOAD Jan 23 18:53:23.404000 audit: BPF prog-id=249 op=UNLOAD Jan 23 18:53:23.404000 audit[5987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdea5511b0 a3=0 items=0 ppid=5971 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.404000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:53:23.404000 audit: BPF prog-id=250 op=LOAD Jan 23 18:53:23.404000 audit[5987]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdea5510c0 a2=94 a3=3 items=0 ppid=5971 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.404000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:53:23.404000 audit: BPF prog-id=250 op=UNLOAD Jan 23 18:53:23.404000 audit[5987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdea5510c0 a2=94 a3=3 items=0 ppid=5971 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.404000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:53:23.404000 audit: BPF prog-id=251 op=LOAD Jan 23 18:53:23.404000 audit[5987]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdea551100 a2=94 a3=7ffdea5512e0 items=0 ppid=5971 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.404000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:53:23.404000 audit: BPF prog-id=251 op=UNLOAD Jan 23 18:53:23.404000 audit[5987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdea551100 a2=94 a3=7ffdea5512e0 items=0 ppid=5971 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.404000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:53:23.416000 audit: BPF prog-id=252 op=LOAD Jan 23 18:53:23.416000 audit[5988]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfccf22e0 a2=98 a3=3 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.416000 audit: BPF prog-id=252 op=UNLOAD Jan 23 18:53:23.416000 audit[5988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcfccf22b0 a3=0 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.416000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.416000 audit: BPF prog-id=253 op=LOAD Jan 23 18:53:23.416000 audit[5988]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcfccf20d0 a2=94 a3=54428f items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.416000 audit: BPF prog-id=253 op=UNLOAD Jan 23 18:53:23.416000 audit[5988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcfccf20d0 a2=94 a3=54428f items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.416000 audit: BPF prog-id=254 op=LOAD Jan 23 18:53:23.416000 audit[5988]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcfccf2100 a2=94 a3=2 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.416000 audit: BPF prog-id=254 op=UNLOAD Jan 23 18:53:23.416000 audit[5988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcfccf2100 a2=0 a3=2 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 
18:53:23.565000 audit: BPF prog-id=255 op=LOAD Jan 23 18:53:23.565000 audit[5988]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcfccf1fc0 a2=94 a3=1 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.565000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.565000 audit: BPF prog-id=255 op=UNLOAD Jan 23 18:53:23.565000 audit[5988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcfccf1fc0 a2=94 a3=1 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.565000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.575000 audit: BPF prog-id=256 op=LOAD Jan 23 18:53:23.575000 audit[5988]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcfccf1fb0 a2=94 a3=4 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.575000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.575000 audit: BPF prog-id=256 op=UNLOAD Jan 23 18:53:23.575000 audit[5988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcfccf1fb0 a2=0 a3=4 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.575000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.576000 audit: BPF prog-id=257 op=LOAD Jan 23 18:53:23.576000 
audit[5988]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcfccf1e10 a2=94 a3=5 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.576000 audit: BPF prog-id=257 op=UNLOAD Jan 23 18:53:23.576000 audit[5988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcfccf1e10 a2=0 a3=5 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.576000 audit: BPF prog-id=258 op=LOAD Jan 23 18:53:23.576000 audit[5988]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcfccf2030 a2=94 a3=6 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.576000 audit: BPF prog-id=258 op=UNLOAD Jan 23 18:53:23.576000 audit[5988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcfccf2030 a2=0 a3=6 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.576000 audit: BPF prog-id=259 op=LOAD Jan 23 18:53:23.576000 audit[5988]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 
a1=7ffcfccf17e0 a2=94 a3=88 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.576000 audit: BPF prog-id=260 op=LOAD Jan 23 18:53:23.576000 audit[5988]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcfccf1660 a2=94 a3=2 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.576000 audit: BPF prog-id=260 op=UNLOAD Jan 23 18:53:23.576000 audit[5988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcfccf1690 a2=0 a3=7ffcfccf1790 items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.577000 audit: BPF prog-id=259 op=UNLOAD Jan 23 18:53:23.577000 audit[5988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2fcc9d10 a2=0 a3=f672ee334b73b31e items=0 ppid=5971 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.577000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:53:23.583000 audit: BPF prog-id=261 op=LOAD Jan 23 18:53:23.583000 audit[5991]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9350c0b0 a2=98 a3=1999999999999999 items=0 
ppid=5971 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.583000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:53:23.583000 audit: BPF prog-id=261 op=UNLOAD Jan 23 18:53:23.583000 audit[5991]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff9350c080 a3=0 items=0 ppid=5971 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.583000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:53:23.583000 audit: BPF prog-id=262 op=LOAD Jan 23 18:53:23.583000 audit[5991]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9350bf90 a2=94 a3=ffff items=0 ppid=5971 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.583000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:53:23.583000 audit: BPF prog-id=262 op=UNLOAD Jan 23 18:53:23.583000 audit[5991]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9350bf90 a2=94 a3=ffff items=0 ppid=5971 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.583000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:53:23.583000 audit: BPF prog-id=263 op=LOAD Jan 23 18:53:23.583000 audit[5991]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9350bfd0 a2=94 a3=7fff9350c1b0 items=0 ppid=5971 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.583000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:53:23.583000 audit: BPF prog-id=263 op=UNLOAD Jan 23 18:53:23.583000 audit[5991]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9350bfd0 a2=94 a3=7fff9350c1b0 items=0 ppid=5971 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.583000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F 
Jan 23 18:53:23.646000 audit: BPF prog-id=264 op=LOAD Jan 23 18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc25cc8aa0 a2=98 a3=0 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.646000 audit: BPF prog-id=264 op=UNLOAD Jan 23 18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc25cc8a70 a3=0 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.646000 audit: BPF prog-id=265 op=LOAD Jan 23 18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc25cc88b0 a2=94 a3=54428f items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.646000 audit: BPF prog-id=265 op=UNLOAD Jan 23 
18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc25cc88b0 a2=94 a3=54428f items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.646000 audit: BPF prog-id=266 op=LOAD Jan 23 18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc25cc88e0 a2=94 a3=2 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.646000 audit: BPF prog-id=266 op=UNLOAD Jan 23 18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc25cc88e0 a2=0 a3=2 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.646000 audit: BPF prog-id=267 op=LOAD Jan 23 18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=321 
success=yes exit=6 a0=5 a1=7ffc25cc8690 a2=94 a3=4 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.646000 audit: BPF prog-id=267 op=UNLOAD Jan 23 18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc25cc8690 a2=94 a3=4 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.646000 audit: BPF prog-id=268 op=LOAD Jan 23 18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc25cc8790 a2=94 a3=7ffc25cc8910 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.646000 audit: BPF prog-id=268 op=UNLOAD Jan 23 18:53:23.646000 audit[6014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc25cc8790 a2=0 
a3=7ffc25cc8910 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.646000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.647000 audit: BPF prog-id=269 op=LOAD Jan 23 18:53:23.647000 audit[6014]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc25cc7ec0 a2=94 a3=2 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.647000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.647000 audit: BPF prog-id=269 op=UNLOAD Jan 23 18:53:23.647000 audit[6014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc25cc7ec0 a2=0 a3=2 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.647000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.647000 audit: BPF prog-id=270 op=LOAD Jan 23 18:53:23.647000 audit[6014]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc25cc7fc0 a2=94 a3=30 items=0 ppid=5971 pid=6014 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.647000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:53:23.654000 audit: BPF prog-id=271 op=LOAD Jan 23 18:53:23.654000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8a54b040 a2=98 a3=0 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.654000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.654000 audit: BPF prog-id=271 op=UNLOAD Jan 23 18:53:23.654000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc8a54b010 a3=0 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.654000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.654000 audit: BPF prog-id=272 op=LOAD Jan 23 18:53:23.654000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc8a54ae30 a2=94 a3=54428f items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.654000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.654000 audit: BPF prog-id=272 op=UNLOAD Jan 23 18:53:23.654000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc8a54ae30 a2=94 a3=54428f items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.654000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.654000 audit: BPF prog-id=273 op=LOAD Jan 23 18:53:23.654000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc8a54ae60 a2=94 a3=2 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.654000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.654000 audit: BPF prog-id=273 op=UNLOAD Jan 23 18:53:23.654000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc8a54ae60 a2=0 a3=2 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.654000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.678258 systemd-networkd[2150]: vxlan.calico: Link UP Jan 23 18:53:23.678265 systemd-networkd[2150]: vxlan.calico: Gained carrier Jan 23 18:53:23.806000 audit: BPF prog-id=274 op=LOAD Jan 23 18:53:23.806000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc8a54ad20 a2=94 a3=1 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.806000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.806000 audit: BPF prog-id=274 op=UNLOAD Jan 23 18:53:23.806000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc8a54ad20 a2=94 a3=1 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.806000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.815000 audit: BPF prog-id=275 op=LOAD Jan 23 18:53:23.815000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc8a54ad10 a2=94 a3=4 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.815000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.815000 audit: BPF prog-id=275 op=UNLOAD Jan 23 18:53:23.815000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc8a54ad10 a2=0 a3=4 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.815000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.816000 audit: BPF prog-id=276 op=LOAD Jan 23 18:53:23.816000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc8a54ab70 a2=94 a3=5 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.816000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.816000 audit: BPF prog-id=276 op=UNLOAD Jan 23 18:53:23.816000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc8a54ab70 a2=0 a3=5 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.816000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.816000 audit: BPF prog-id=277 op=LOAD Jan 23 18:53:23.816000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc8a54ad90 a2=94 a3=6 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.816000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.816000 audit: BPF prog-id=277 op=UNLOAD Jan 23 18:53:23.816000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc8a54ad90 a2=0 a3=6 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.816000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.816000 audit: BPF prog-id=278 op=LOAD Jan 23 18:53:23.816000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc8a54a540 a2=94 a3=88 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.816000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.817000 audit: BPF prog-id=279 op=LOAD Jan 23 18:53:23.817000 audit[6018]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc8a54a3c0 a2=94 a3=2 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.817000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.817000 audit: BPF prog-id=279 op=UNLOAD Jan 23 18:53:23.817000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc8a54a3f0 a2=0 a3=7ffc8a54a4f0 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.817000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.818000 audit: BPF prog-id=278 op=UNLOAD Jan 23 18:53:23.818000 audit[6018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=34383d10 a2=0 a3=798b116eb8d9010 items=0 ppid=5971 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.818000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:53:23.824000 audit: BPF prog-id=270 op=UNLOAD Jan 23 18:53:23.824000 audit[5971]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000c5a000 a2=0 a3=0 items=0 ppid=5140 pid=5971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.824000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 18:53:23.941000 audit[6056]: NETFILTER_CFG table=mangle:134 family=2 entries=16 op=nft_register_chain pid=6056 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:53:23.941000 audit[6056]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdef7d8400 a2=0 a3=7ffdef7d83ec items=0 ppid=5971 pid=6056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.941000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:53:23.943000 audit[6057]: NETFILTER_CFG table=nat:135 family=2 entries=15 op=nft_register_chain pid=6057 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:53:23.943000 audit[6057]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdf2d392f0 a2=0 a3=7ffdf2d392dc items=0 ppid=5971 pid=6057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.943000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:53:23.949000 audit[6055]: NETFILTER_CFG table=raw:136 family=2 entries=21 op=nft_register_chain pid=6055 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:53:23.949000 audit[6055]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd99550380 a2=0 a3=7ffd9955036c items=0 ppid=5971 pid=6055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.949000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:53:23.964000 audit[6058]: NETFILTER_CFG table=filter:137 family=2 entries=333 op=nft_register_chain pid=6058 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:53:23.964000 audit[6058]: SYSCALL arch=c000003e syscall=46 success=yes exit=196320 a0=3 a1=7ffeeb6356e0 a2=0 a3=7ffeeb6356cc items=0 ppid=5971 pid=6058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:53:23.964000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:53:25.509806 systemd-networkd[2150]: vxlan.calico: Gained IPv6LL Jan 23 18:53:27.894038 containerd[2555]: time="2026-01-23T18:53:27.893402918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:53:28.158174 containerd[2555]: time="2026-01-23T18:53:28.158060102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
18:53:28.160787 containerd[2555]: time="2026-01-23T18:53:28.160756137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:53:28.160787 containerd[2555]: time="2026-01-23T18:53:28.160808867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:28.160956 kubelet[3990]: E0123 18:53:28.160908 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:53:28.161234 kubelet[3990]: E0123 18:53:28.160963 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:53:28.161234 kubelet[3990]: E0123 18:53:28.161182 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:81b9f55a93b74810ac86061c7b4e22d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwc6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-774b9649d4-hsh9h_calico-system(7886516f-3341-4184-8abc-3d16d954f0c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:28.163389 containerd[2555]: time="2026-01-23T18:53:28.163258395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:53:28.411125 containerd[2555]: 
time="2026-01-23T18:53:28.411018018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:28.414567 containerd[2555]: time="2026-01-23T18:53:28.414537148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:53:28.414567 containerd[2555]: time="2026-01-23T18:53:28.414586276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:28.414744 kubelet[3990]: E0123 18:53:28.414696 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:53:28.414744 kubelet[3990]: E0123 18:53:28.414739 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:53:28.414943 kubelet[3990]: E0123 18:53:28.414875 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwc6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-774b9649d4-hsh9h_calico-system(7886516f-3341-4184-8abc-3d16d954f0c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:28.416099 kubelet[3990]: E0123 18:53:28.416062 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:53:31.893426 containerd[2555]: time="2026-01-23T18:53:31.893018807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:53:32.155648 containerd[2555]: time="2026-01-23T18:53:32.155537217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:32.158046 containerd[2555]: time="2026-01-23T18:53:32.157994858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:53:32.158149 containerd[2555]: time="2026-01-23T18:53:32.158001633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:32.158242 kubelet[3990]: E0123 18:53:32.158192 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:32.158527 kubelet[3990]: E0123 18:53:32.158251 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:32.158556 kubelet[3990]: E0123 18:53:32.158473 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dq6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8686dc9b89-f4rb7_calico-apiserver(c7f09343-3d0b-4264-987b-68763f2830ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:32.159063 containerd[2555]: time="2026-01-23T18:53:32.158801532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:53:32.160270 kubelet[3990]: E0123 18:53:32.160235 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:53:32.426116 containerd[2555]: time="2026-01-23T18:53:32.426012996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
18:53:32.428749 containerd[2555]: time="2026-01-23T18:53:32.428705074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:53:32.428749 containerd[2555]: time="2026-01-23T18:53:32.428729755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:32.428985 kubelet[3990]: E0123 18:53:32.428957 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:53:32.429045 kubelet[3990]: E0123 18:53:32.428996 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:53:32.429308 kubelet[3990]: E0123 18:53:32.429235 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82cb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-22mgp_calico-system(12936b13-6ad9-4c1b-a913-2f3039ac097a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:32.429660 containerd[2555]: time="2026-01-23T18:53:32.429601999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:53:32.431251 kubelet[3990]: E0123 18:53:32.431206 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:53:32.684651 containerd[2555]: time="2026-01-23T18:53:32.684569093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:32.686994 containerd[2555]: 
time="2026-01-23T18:53:32.686957511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:53:32.686994 containerd[2555]: time="2026-01-23T18:53:32.687012046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:32.687152 kubelet[3990]: E0123 18:53:32.687119 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:32.687297 kubelet[3990]: E0123 18:53:32.687165 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:32.687333 kubelet[3990]: E0123 18:53:32.687296 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x9br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8686dc9b89-kzk6x_calico-apiserver(c2f1acaa-9237-4a56-b34a-eb28ae8b7529): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:32.688470 kubelet[3990]: E0123 18:53:32.688444 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:53:33.893254 containerd[2555]: time="2026-01-23T18:53:33.892846837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:53:34.167681 containerd[2555]: time="2026-01-23T18:53:34.167560427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:34.170319 containerd[2555]: time="2026-01-23T18:53:34.170287369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:53:34.170412 containerd[2555]: time="2026-01-23T18:53:34.170351321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:34.170527 kubelet[3990]: E0123 18:53:34.170452 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:53:34.170810 kubelet[3990]: E0123 18:53:34.170540 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:53:34.170810 kubelet[3990]: E0123 18:53:34.170670 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:34.172939 containerd[2555]: time="2026-01-23T18:53:34.172911996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:53:34.435708 containerd[2555]: time="2026-01-23T18:53:34.435581685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:34.438311 containerd[2555]: time="2026-01-23T18:53:34.438278228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:53:34.438311 containerd[2555]: time="2026-01-23T18:53:34.438331848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:34.438549 kubelet[3990]: E0123 18:53:34.438469 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:53:34.438608 kubelet[3990]: E0123 18:53:34.438563 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:53:34.438721 kubelet[3990]: E0123 18:53:34.438680 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:34.439965 kubelet[3990]: E0123 18:53:34.439903 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:35.892942 containerd[2555]: time="2026-01-23T18:53:35.892663544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:53:36.177063 containerd[2555]: time="2026-01-23T18:53:36.176931324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:36.179436 containerd[2555]: time="2026-01-23T18:53:36.179402540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:53:36.179583 containerd[2555]: time="2026-01-23T18:53:36.179418776Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:36.179738 kubelet[3990]: E0123 18:53:36.179705 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:53:36.180101 kubelet[3990]: E0123 18:53:36.179749 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:53:36.180101 kubelet[3990]: E0123 18:53:36.179885 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.
pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndlkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6658c89489-trg8d_calico-system(0fe1ccdb-f11d-478d-b8c5-50e7a678ae44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:36.181368 kubelet[3990]: E0123 18:53:36.181331 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:53:43.894180 kubelet[3990]: E0123 18:53:43.893894 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:53:45.892656 kubelet[3990]: E0123 18:53:45.892610 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:53:45.894099 kubelet[3990]: E0123 18:53:45.894070 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:53:46.892115 kubelet[3990]: E0123 18:53:46.892048 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:53:46.892896 kubelet[3990]: E0123 18:53:46.892859 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:47.893206 kubelet[3990]: E0123 
18:53:47.892894 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:53:56.892721 containerd[2555]: time="2026-01-23T18:53:56.892668730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:53:57.177740 containerd[2555]: time="2026-01-23T18:53:57.177009306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:57.182964 containerd[2555]: time="2026-01-23T18:53:57.182846964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:53:57.182964 containerd[2555]: time="2026-01-23T18:53:57.182932638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:57.183098 kubelet[3990]: E0123 18:53:57.183045 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:53:57.183098 kubelet[3990]: E0123 18:53:57.183090 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:53:57.183389 kubelet[3990]: E0123 18:53:57.183205 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:81b9f55a93b74810ac86061c7b4e22d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwc6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-774b9649d4-hsh9h_calico-system(7886516f-3341-4184-8abc-3d16d954f0c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Jan 23 18:53:57.185789 containerd[2555]: time="2026-01-23T18:53:57.185755446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:53:57.458058 containerd[2555]: time="2026-01-23T18:53:57.457654876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:57.461307 containerd[2555]: time="2026-01-23T18:53:57.461221578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:53:57.461419 containerd[2555]: time="2026-01-23T18:53:57.461248452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:57.461531 kubelet[3990]: E0123 18:53:57.461469 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:53:57.461599 kubelet[3990]: E0123 18:53:57.461545 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:53:57.461737 kubelet[3990]: E0123 18:53:57.461670 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwc6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-774b9649d4-hsh9h_calico-system(7886516f-3341-4184-8abc-3d16d954f0c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:57.463203 kubelet[3990]: E0123 18:53:57.463151 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:53:57.894843 containerd[2555]: time="2026-01-23T18:53:57.894656825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:53:58.144462 containerd[2555]: time="2026-01-23T18:53:58.144382949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:58.147427 containerd[2555]: time="2026-01-23T18:53:58.147231228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:53:58.147609 containerd[2555]: time="2026-01-23T18:53:58.147261381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:58.147786 kubelet[3990]: E0123 18:53:58.147704 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:58.147786 kubelet[3990]: E0123 18:53:58.147770 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:53:58.148513 kubelet[3990]: E0123 18:53:58.148154 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x9br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8686dc9b89-kzk6x_calico-apiserver(c2f1acaa-9237-4a56-b34a-eb28ae8b7529): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:58.149143 containerd[2555]: time="2026-01-23T18:53:58.148848569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:53:58.149329 kubelet[3990]: E0123 18:53:58.149307 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:53:58.400445 containerd[2555]: time="2026-01-23T18:53:58.400318557Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
18:53:58.403330 containerd[2555]: time="2026-01-23T18:53:58.403208971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:53:58.403330 containerd[2555]: time="2026-01-23T18:53:58.403302963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:58.403775 kubelet[3990]: E0123 18:53:58.403612 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:53:58.403775 kubelet[3990]: E0123 18:53:58.403755 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:53:58.406717 kubelet[3990]: E0123 18:53:58.404388 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 23 18:53:58.407717 containerd[2555]: time="2026-01-23T18:53:58.407663426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:53:58.666111 containerd[2555]: time="2026-01-23T18:53:58.665978752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:53:58.670178 containerd[2555]: time="2026-01-23T18:53:58.670106439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:53:58.670287 containerd[2555]: time="2026-01-23T18:53:58.670211497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:53:58.670470 kubelet[3990]: E0123 18:53:58.670408 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:53:58.670583 kubelet[3990]: E0123 18:53:58.670499 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:53:58.670748 kubelet[3990]: E0123 18:53:58.670665 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:53:58.671991 kubelet[3990]: E0123 18:53:58.671904 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:53:59.896513 containerd[2555]: time="2026-01-23T18:53:59.895106811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:54:00.169499 containerd[2555]: time="2026-01-23T18:54:00.169370983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:00.172006 containerd[2555]: time="2026-01-23T18:54:00.171964126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:54:00.172124 containerd[2555]: time="2026-01-23T18:54:00.172023882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:00.172212 kubelet[3990]: E0123 18:54:00.172159 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:54:00.172556 kubelet[3990]: E0123 18:54:00.172222 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:54:00.172556 kubelet[3990]: E0123 18:54:00.172437 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndlkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6658c89489-trg8d_calico-system(0fe1ccdb-f11d-478d-b8c5-50e7a678ae44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:00.173111 containerd[2555]: time="2026-01-23T18:54:00.173082046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:54:00.174593 kubelet[3990]: E0123 18:54:00.174561 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:54:00.430891 containerd[2555]: time="2026-01-23T18:54:00.430494474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:00.433016 containerd[2555]: time="2026-01-23T18:54:00.432922370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:54:00.433159 containerd[2555]: time="2026-01-23T18:54:00.432973243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:00.433437 kubelet[3990]: E0123 18:54:00.433392 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:54:00.433634 kubelet[3990]: E0123 18:54:00.433513 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:54:00.434382 containerd[2555]: time="2026-01-23T18:54:00.434196981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:54:00.434627 kubelet[3990]: E0123 18:54:00.434285 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82cb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-22mgp_calico-system(12936b13-6ad9-4c1b-a913-2f3039ac097a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:00.436084 kubelet[3990]: E0123 18:54:00.436055 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:54:00.696621 containerd[2555]: time="2026-01-23T18:54:00.696529335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:00.698992 containerd[2555]: time="2026-01-23T18:54:00.698948282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:54:00.698992 containerd[2555]: time="2026-01-23T18:54:00.698975345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:00.699168 kubelet[3990]: E0123 18:54:00.699123 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:54:00.699209 kubelet[3990]: E0123 18:54:00.699175 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:54:00.699491 kubelet[3990]: E0123 18:54:00.699313 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dq6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8686dc9b89-f4rb7_calico-apiserver(c7f09343-3d0b-4264-987b-68763f2830ab): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:00.700787 kubelet[3990]: E0123 18:54:00.700756 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:54:10.893355 kubelet[3990]: E0123 18:54:10.892728 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:54:10.894124 kubelet[3990]: E0123 18:54:10.894089 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:54:10.895992 kubelet[3990]: E0123 18:54:10.895947 3990 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:54:11.895204 kubelet[3990]: E0123 18:54:11.895113 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:54:12.893794 kubelet[3990]: E0123 18:54:12.893750 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:54:13.892789 kubelet[3990]: E0123 18:54:13.892739 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:54:18.621560 kernel: kauditd_printk_skb: 194 callbacks suppressed Jan 23 18:54:18.621708 kernel: audit: type=1130 audit(1769194458.618:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.14:22-10.200.16.10:42372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:18.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.14:22-10.200.16.10:42372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:18.618783 systemd[1]: Started sshd@7-10.200.8.14:22-10.200.16.10:42372.service - OpenSSH per-connection server daemon (10.200.16.10:42372). 
Jan 23 18:54:19.180075 sshd[6189]: Accepted publickey for core from 10.200.16.10 port 42372 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:19.179000 audit[6189]: USER_ACCT pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.189523 kernel: audit: type=1101 audit(1769194459.179:762): pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.190828 sshd-session[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:19.189000 audit[6189]: CRED_ACQ pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.202943 kernel: audit: type=1103 audit(1769194459.189:763): pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.203008 kernel: audit: type=1006 audit(1769194459.189:764): pid=6189 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 23 18:54:19.189000 audit[6189]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe00fa09f0 a2=3 a3=0 items=0 ppid=1 pid=6189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:19.189000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:19.215754 kernel: audit: type=1300 audit(1769194459.189:764): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe00fa09f0 a2=3 a3=0 items=0 ppid=1 pid=6189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:19.215815 kernel: audit: type=1327 audit(1769194459.189:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:19.216901 systemd-logind[2501]: New session 11 of user core. Jan 23 18:54:19.221652 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:54:19.224000 audit[6189]: USER_START pid=6189 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.238495 kernel: audit: type=1105 audit(1769194459.224:765): pid=6189 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.235000 audit[6193]: CRED_ACQ pid=6193 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.249508 kernel: audit: type=1103 audit(1769194459.235:766): pid=6193 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.586788 sshd[6193]: Connection closed by 10.200.16.10 port 42372 Jan 23 18:54:19.588552 sshd-session[6189]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:19.589000 audit[6189]: USER_END pid=6189 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.601604 kernel: audit: type=1106 audit(1769194459.589:767): pid=6189 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.601675 kernel: audit: type=1104 audit(1769194459.589:768): pid=6189 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.589000 audit[6189]: CRED_DISP pid=6189 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:19.597790 systemd[1]: sshd@7-10.200.8.14:22-10.200.16.10:42372.service: Deactivated successfully. Jan 23 18:54:19.601601 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 23 18:54:19.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.14:22-10.200.16.10:42372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:19.602516 systemd-logind[2501]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:54:19.604167 systemd-logind[2501]: Removed session 11. Jan 23 18:54:22.892844 kubelet[3990]: E0123 18:54:22.892791 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:54:23.898700 kubelet[3990]: E0123 18:54:23.898651 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 
18:54:24.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.14:22-10.200.16.10:45840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:24.704835 systemd[1]: Started sshd@8-10.200.8.14:22-10.200.16.10:45840.service - OpenSSH per-connection server daemon (10.200.16.10:45840). Jan 23 18:54:24.706500 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:54:24.706561 kernel: audit: type=1130 audit(1769194464.704:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.14:22-10.200.16.10:45840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:24.893301 kubelet[3990]: E0123 18:54:24.892959 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:54:25.282000 audit[6207]: USER_ACCT pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.288496 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 45840 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:25.288830 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:25.291518 kernel: audit: type=1101 
audit(1769194465.282:771): pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.287000 audit[6207]: CRED_ACQ pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.300512 kernel: audit: type=1103 audit(1769194465.287:772): pid=6207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.300582 kernel: audit: type=1006 audit(1769194465.287:773): pid=6207 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 23 18:54:25.287000 audit[6207]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd75855eb0 a2=3 a3=0 items=0 ppid=1 pid=6207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:25.306671 kernel: audit: type=1300 audit(1769194465.287:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd75855eb0 a2=3 a3=0 items=0 ppid=1 pid=6207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:25.287000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:25.310499 kernel: audit: type=1327 audit(1769194465.287:773): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:25.310226 systemd-logind[2501]: New session 12 of user core. Jan 23 18:54:25.314649 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:54:25.316000 audit[6207]: USER_START pid=6207 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.319000 audit[6211]: CRED_ACQ pid=6211 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.329002 kernel: audit: type=1105 audit(1769194465.316:774): pid=6207 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.329059 kernel: audit: type=1103 audit(1769194465.319:775): pid=6211 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.722010 sshd[6211]: Connection closed by 10.200.16.10 port 45840 Jan 23 18:54:25.722834 sshd-session[6207]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:25.723000 audit[6207]: USER_END pid=6207 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.736516 kernel: audit: type=1106 audit(1769194465.723:776): pid=6207 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.735000 audit[6207]: CRED_DISP pid=6207 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.740468 systemd[1]: sshd@8-10.200.8.14:22-10.200.16.10:45840.service: Deactivated successfully. Jan 23 18:54:25.742680 kernel: audit: type=1104 audit(1769194465.735:777): pid=6207 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:25.743444 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:54:25.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.14:22-10.200.16.10:45840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:25.746420 systemd-logind[2501]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:54:25.748623 systemd-logind[2501]: Removed session 12. 
Jan 23 18:54:25.894359 kubelet[3990]: E0123 18:54:25.894325 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:54:26.893066 kubelet[3990]: E0123 18:54:26.892718 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:54:26.894520 kubelet[3990]: E0123 18:54:26.894157 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:54:30.837463 systemd[1]: Started sshd@9-10.200.8.14:22-10.200.16.10:37486.service - OpenSSH per-connection server daemon (10.200.16.10:37486). Jan 23 18:54:30.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.14:22-10.200.16.10:37486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:30.840867 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:54:30.841020 kernel: audit: type=1130 audit(1769194470.836:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.14:22-10.200.16.10:37486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:31.403000 audit[6223]: USER_ACCT pid=6223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.405217 sshd[6223]: Accepted publickey for core from 10.200.16.10 port 37486 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:31.407508 sshd-session[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:31.405000 audit[6223]: CRED_ACQ pid=6223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.416704 kernel: audit: type=1101 audit(1769194471.403:780): pid=6223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.416776 kernel: audit: type=1103 audit(1769194471.405:781): pid=6223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.417805 systemd-logind[2501]: New session 13 of user core. Jan 23 18:54:31.421954 kernel: audit: type=1006 audit(1769194471.405:782): pid=6223 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 23 18:54:31.405000 audit[6223]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb0356b80 a2=3 a3=0 items=0 ppid=1 pid=6223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:31.429113 kernel: audit: type=1300 audit(1769194471.405:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb0356b80 a2=3 a3=0 items=0 ppid=1 pid=6223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:31.429800 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 18:54:31.405000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:31.432000 audit[6223]: USER_START pid=6223 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.441963 kernel: audit: type=1327 audit(1769194471.405:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:31.442024 kernel: audit: type=1105 audit(1769194471.432:783): pid=6223 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.435000 audit[6227]: CRED_ACQ pid=6227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.447346 kernel: audit: type=1103 audit(1769194471.435:784): pid=6227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.797578 sshd[6227]: Connection closed by 10.200.16.10 port 37486 Jan 23 18:54:31.798469 sshd-session[6223]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:31.800000 audit[6223]: USER_END pid=6223 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.805303 systemd[1]: sshd@9-10.200.8.14:22-10.200.16.10:37486.service: Deactivated successfully. Jan 23 18:54:31.805686 systemd-logind[2501]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:54:31.808581 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:54:31.812734 kernel: audit: type=1106 audit(1769194471.800:785): pid=6223 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.812447 systemd-logind[2501]: Removed session 13. Jan 23 18:54:31.800000 audit[6223]: CRED_DISP pid=6223 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.14:22-10.200.16.10:37486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:54:31.820215 kernel: audit: type=1104 audit(1769194471.800:786): pid=6223 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:31.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.14:22-10.200.16.10:37496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:31.919753 systemd[1]: Started sshd@10-10.200.8.14:22-10.200.16.10:37496.service - OpenSSH per-connection server daemon (10.200.16.10:37496). Jan 23 18:54:32.500000 audit[6240]: USER_ACCT pid=6240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:32.504135 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 37496 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:32.503000 audit[6240]: CRED_ACQ pid=6240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:32.503000 audit[6240]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2fc4a6c0 a2=3 a3=0 items=0 ppid=1 pid=6240 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:32.503000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:32.505866 sshd-session[6240]: pam_unix(sshd:session): session 
opened for user core(uid=500) by core(uid=0) Jan 23 18:54:32.513046 systemd-logind[2501]: New session 14 of user core. Jan 23 18:54:32.518764 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:54:32.523000 audit[6240]: USER_START pid=6240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:32.526000 audit[6244]: CRED_ACQ pid=6244 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:32.899940 sshd[6244]: Connection closed by 10.200.16.10 port 37496 Jan 23 18:54:32.900212 sshd-session[6240]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:32.900000 audit[6240]: USER_END pid=6240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:32.900000 audit[6240]: CRED_DISP pid=6240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:32.904524 systemd-logind[2501]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:54:32.904987 systemd[1]: sshd@10-10.200.8.14:22-10.200.16.10:37496.service: Deactivated successfully. 
Jan 23 18:54:32.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.14:22-10.200.16.10:37496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:32.906985 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:54:32.908770 systemd-logind[2501]: Removed session 14. Jan 23 18:54:33.016524 systemd[1]: Started sshd@11-10.200.8.14:22-10.200.16.10:37498.service - OpenSSH per-connection server daemon (10.200.16.10:37498). Jan 23 18:54:33.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.14:22-10.200.16.10:37498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:33.580000 audit[6254]: USER_ACCT pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:33.582577 sshd[6254]: Accepted publickey for core from 10.200.16.10 port 37498 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:33.581000 audit[6254]: CRED_ACQ pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:33.582000 audit[6254]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbf13ebe0 a2=3 a3=0 items=0 ppid=1 pid=6254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:33.582000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:33.584338 sshd-session[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:33.588951 systemd-logind[2501]: New session 15 of user core. Jan 23 18:54:33.595806 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:54:33.598000 audit[6254]: USER_START pid=6254 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:33.600000 audit[6258]: CRED_ACQ pid=6258 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:33.969324 sshd[6258]: Connection closed by 10.200.16.10 port 37498 Jan 23 18:54:33.972632 sshd-session[6254]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:33.973000 audit[6254]: USER_END pid=6254 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:33.973000 audit[6254]: CRED_DISP pid=6254 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:33.977822 systemd[1]: sshd@11-10.200.8.14:22-10.200.16.10:37498.service: Deactivated successfully. 
Jan 23 18:54:33.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.14:22-10.200.16.10:37498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:33.982232 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:54:33.984437 systemd-logind[2501]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:54:33.985755 systemd-logind[2501]: Removed session 15. Jan 23 18:54:34.474134 update_engine[2502]: I20260123 18:54:34.474069 2502 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 18:54:34.474134 update_engine[2502]: I20260123 18:54:34.474128 2502 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 18:54:34.476592 update_engine[2502]: I20260123 18:54:34.474340 2502 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 18:54:34.476859 update_engine[2502]: I20260123 18:54:34.476831 2502 omaha_request_params.cc:62] Current group set to beta Jan 23 18:54:34.476988 update_engine[2502]: I20260123 18:54:34.476974 2502 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 18:54:34.477034 update_engine[2502]: I20260123 18:54:34.477025 2502 update_attempter.cc:643] Scheduling an action processor start. 
Jan 23 18:54:34.477506 update_engine[2502]: I20260123 18:54:34.477083 2502 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 18:54:34.477506 update_engine[2502]: I20260123 18:54:34.477133 2502 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 18:54:34.477506 update_engine[2502]: I20260123 18:54:34.477187 2502 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 18:54:34.477506 update_engine[2502]: I20260123 18:54:34.477192 2502 omaha_request_action.cc:272] Request: Jan 23 18:54:34.477506 update_engine[2502]: Jan 23 18:54:34.477506 update_engine[2502]: Jan 23 18:54:34.477506 update_engine[2502]: Jan 23 18:54:34.477506 update_engine[2502]: Jan 23 18:54:34.477506 update_engine[2502]: Jan 23 18:54:34.477506 update_engine[2502]: Jan 23 18:54:34.477506 update_engine[2502]: Jan 23 18:54:34.477506 update_engine[2502]: Jan 23 18:54:34.477506 update_engine[2502]: I20260123 18:54:34.477197 2502 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:54:34.477813 locksmithd[2594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 18:54:34.478194 update_engine[2502]: I20260123 18:54:34.478168 2502 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:54:34.478673 update_engine[2502]: I20260123 18:54:34.478646 2502 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 18:54:34.516198 update_engine[2502]: E20260123 18:54:34.516163 2502 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:54:34.516276 update_engine[2502]: I20260123 18:54:34.516239 2502 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 18:54:34.892177 kubelet[3990]: E0123 18:54:34.892121 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:54:36.893497 kubelet[3990]: E0123 18:54:36.893251 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:54:36.894280 kubelet[3990]: E0123 18:54:36.894168 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:54:36.894544 kubelet[3990]: E0123 18:54:36.894461 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:54:38.892512 kubelet[3990]: E0123 18:54:38.892339 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:54:38.893301 containerd[2555]: time="2026-01-23T18:54:38.892841558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:54:39.093296 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 18:54:39.093385 kernel: audit: type=1130 audit(1769194479.086:806): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.14:22-10.200.16.10:37512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:39.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.14:22-10.200.16.10:37512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:39.086506 systemd[1]: Started sshd@12-10.200.8.14:22-10.200.16.10:37512.service - OpenSSH per-connection server daemon (10.200.16.10:37512). Jan 23 18:54:39.151281 containerd[2555]: time="2026-01-23T18:54:39.151183299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:39.153595 containerd[2555]: time="2026-01-23T18:54:39.153561242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:54:39.153776 containerd[2555]: time="2026-01-23T18:54:39.153639932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:39.153810 kubelet[3990]: E0123 18:54:39.153772 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:54:39.153848 kubelet[3990]: E0123 18:54:39.153821 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:54:39.153967 kubelet[3990]: E0123 
18:54:39.153931 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:39.156031 containerd[2555]: time="2026-01-23T18:54:39.155950006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:54:39.399831 containerd[2555]: time="2026-01-23T18:54:39.399697470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:39.402446 containerd[2555]: time="2026-01-23T18:54:39.402073549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:54:39.402446 containerd[2555]: time="2026-01-23T18:54:39.402091295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:39.402950 kubelet[3990]: E0123 18:54:39.402261 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:54:39.402950 kubelet[3990]: E0123 18:54:39.402301 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:54:39.402950 kubelet[3990]: E0123 18:54:39.402626 3990 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-slbmv_calico-system(ad1b7350-c4c8-43d5-adb7-51075adcd4fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:39.403832 kubelet[3990]: E0123 18:54:39.403801 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:54:39.647192 sshd[6276]: Accepted publickey for core from 10.200.16.10 port 37512 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:39.646000 audit[6276]: USER_ACCT pid=6276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:39.655267 kernel: audit: type=1101 audit(1769194479.646:807): pid=6276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:39.655131 sshd-session[6276]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Jan 23 18:54:39.653000 audit[6276]: CRED_ACQ pid=6276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:39.666502 kernel: audit: type=1103 audit(1769194479.653:808): pid=6276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:39.667821 systemd-logind[2501]: New session 16 of user core. Jan 23 18:54:39.673494 kernel: audit: type=1006 audit(1769194479.653:809): pid=6276 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 23 18:54:39.673567 kernel: audit: type=1300 audit(1769194479.653:809): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce13d1210 a2=3 a3=0 items=0 ppid=1 pid=6276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:39.653000 audit[6276]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce13d1210 a2=3 a3=0 items=0 ppid=1 pid=6276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:39.684694 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 18:54:39.653000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:39.690899 kernel: audit: type=1327 audit(1769194479.653:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:39.693000 audit[6276]: USER_START pid=6276 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:39.702011 kernel: audit: type=1105 audit(1769194479.693:810): pid=6276 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:39.694000 audit[6280]: CRED_ACQ pid=6280 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:39.709499 kernel: audit: type=1103 audit(1769194479.694:811): pid=6280 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:40.039520 sshd[6280]: Connection closed by 10.200.16.10 port 37512 Jan 23 18:54:40.041079 sshd-session[6276]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:40.042000 audit[6276]: USER_END pid=6276 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:40.049607 kernel: audit: type=1106 audit(1769194480.042:812): pid=6276 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:40.049219 systemd-logind[2501]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:54:40.051280 systemd[1]: sshd@12-10.200.8.14:22-10.200.16.10:37512.service: Deactivated successfully. Jan 23 18:54:40.042000 audit[6276]: CRED_DISP pid=6276 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:40.057741 kernel: audit: type=1104 audit(1769194480.042:813): pid=6276 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:40.057443 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:54:40.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.14:22-10.200.16.10:37512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:40.061376 systemd-logind[2501]: Removed session 16. 
Jan 23 18:54:44.478078 update_engine[2502]: I20260123 18:54:44.477586 2502 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:54:44.478078 update_engine[2502]: I20260123 18:54:44.477682 2502 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:54:44.478078 update_engine[2502]: I20260123 18:54:44.478020 2502 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 18:54:44.514795 update_engine[2502]: E20260123 18:54:44.514674 2502 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:54:44.514795 update_engine[2502]: I20260123 18:54:44.514770 2502 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 18:54:45.157515 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:54:45.157629 kernel: audit: type=1130 audit(1769194485.154:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.14:22-10.200.16.10:49366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:45.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.14:22-10.200.16.10:49366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:45.156071 systemd[1]: Started sshd@13-10.200.8.14:22-10.200.16.10:49366.service - OpenSSH per-connection server daemon (10.200.16.10:49366). 
Jan 23 18:54:45.722000 audit[6330]: USER_ACCT pid=6330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:45.726119 sshd-session[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:45.728929 sshd[6330]: Accepted publickey for core from 10.200.16.10 port 49366 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:45.722000 audit[6330]: CRED_ACQ pid=6330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:45.733718 systemd-logind[2501]: New session 17 of user core. Jan 23 18:54:45.736748 kernel: audit: type=1101 audit(1769194485.722:816): pid=6330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:45.736810 kernel: audit: type=1103 audit(1769194485.722:817): pid=6330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:45.740538 kernel: audit: type=1006 audit(1769194485.722:818): pid=6330 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 23 18:54:45.722000 audit[6330]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7c602a70 a2=3 a3=0 items=0 ppid=1 pid=6330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:45.745550 kernel: audit: type=1300 audit(1769194485.722:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7c602a70 a2=3 a3=0 items=0 ppid=1 pid=6330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:45.722000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:45.748275 kernel: audit: type=1327 audit(1769194485.722:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:45.748709 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 18:54:45.749000 audit[6330]: USER_START pid=6330 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:45.751000 audit[6334]: CRED_ACQ pid=6334 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:45.761936 kernel: audit: type=1105 audit(1769194485.749:819): pid=6330 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:45.761976 kernel: audit: type=1103 audit(1769194485.751:820): pid=6334 uid=0 auid=500 ses=17 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:46.113501 sshd[6334]: Connection closed by 10.200.16.10 port 49366 Jan 23 18:54:46.115635 sshd-session[6330]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:46.116000 audit[6330]: USER_END pid=6330 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:46.130724 systemd-logind[2501]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:54:46.131666 kernel: audit: type=1106 audit(1769194486.116:821): pid=6330 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:46.132381 systemd[1]: sshd@13-10.200.8.14:22-10.200.16.10:49366.service: Deactivated successfully. Jan 23 18:54:46.116000 audit[6330]: CRED_DISP pid=6330 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:46.136775 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:54:46.144094 systemd-logind[2501]: Removed session 17. 
Jan 23 18:54:46.145501 kernel: audit: type=1104 audit(1769194486.116:822): pid=6330 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:46.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.14:22-10.200.16.10:49366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:46.892160 containerd[2555]: time="2026-01-23T18:54:46.891845632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:54:47.163714 containerd[2555]: time="2026-01-23T18:54:47.163600567Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:47.169830 containerd[2555]: time="2026-01-23T18:54:47.169800455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:54:47.169898 containerd[2555]: time="2026-01-23T18:54:47.169874854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:47.170023 kubelet[3990]: E0123 18:54:47.169989 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:54:47.170584 kubelet[3990]: E0123 18:54:47.170036 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:54:47.170584 kubelet[3990]: E0123 18:54:47.170146 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:81b9f55a93b74810ac86061c7b4e22d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwc6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-774b9649d4-hsh9h_calico-system(7886516f-3341-4184-8abc-3d16d954f0c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Jan 23 18:54:47.172669 containerd[2555]: time="2026-01-23T18:54:47.172632224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:54:47.450922 containerd[2555]: time="2026-01-23T18:54:47.450812646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:47.453159 containerd[2555]: time="2026-01-23T18:54:47.453121888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:54:47.453249 containerd[2555]: time="2026-01-23T18:54:47.453188862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:47.453325 kubelet[3990]: E0123 18:54:47.453282 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:54:47.453379 kubelet[3990]: E0123 18:54:47.453333 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:54:47.453523 kubelet[3990]: E0123 18:54:47.453453 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwc6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-774b9649d4-hsh9h_calico-system(7886516f-3341-4184-8abc-3d16d954f0c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:47.454979 kubelet[3990]: E0123 18:54:47.454920 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:54:47.897691 containerd[2555]: time="2026-01-23T18:54:47.897649513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:54:48.161025 containerd[2555]: time="2026-01-23T18:54:48.160774203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:48.163326 containerd[2555]: time="2026-01-23T18:54:48.163283322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:54:48.163555 containerd[2555]: time="2026-01-23T18:54:48.163377933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:48.163679 kubelet[3990]: E0123 18:54:48.163632 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:54:48.163728 kubelet[3990]: E0123 18:54:48.163688 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:54:48.163933 kubelet[3990]: E0123 18:54:48.163845 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndlkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6658c89489-trg8d_calico-system(0fe1ccdb-f11d-478d-b8c5-50e7a678ae44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:48.165033 kubelet[3990]: E0123 18:54:48.164995 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" 
podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:54:49.892999 containerd[2555]: time="2026-01-23T18:54:49.892930104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:54:50.152113 containerd[2555]: time="2026-01-23T18:54:50.151987485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:50.154473 containerd[2555]: time="2026-01-23T18:54:50.154438270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:54:50.154576 containerd[2555]: time="2026-01-23T18:54:50.154541281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:50.154736 kubelet[3990]: E0123 18:54:50.154704 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:54:50.155056 kubelet[3990]: E0123 18:54:50.154749 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:54:50.155056 kubelet[3990]: E0123 18:54:50.154916 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dq6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8686dc9b89-f4rb7_calico-apiserver(c7f09343-3d0b-4264-987b-68763f2830ab): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:50.156472 kubelet[3990]: E0123 18:54:50.156416 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:54:50.892954 containerd[2555]: time="2026-01-23T18:54:50.892664996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:54:51.170565 containerd[2555]: time="2026-01-23T18:54:51.169927842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:51.172657 containerd[2555]: time="2026-01-23T18:54:51.172595473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:54:51.172657 containerd[2555]: time="2026-01-23T18:54:51.172629804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:51.174682 kubelet[3990]: E0123 18:54:51.174633 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:54:51.174972 kubelet[3990]: E0123 18:54:51.174694 3990 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:54:51.174972 kubelet[3990]: E0123 18:54:51.174825 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x9br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8686dc9b89-kzk6x_calico-apiserver(c2f1acaa-9237-4a56-b34a-eb28ae8b7529): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:51.176293 kubelet[3990]: E0123 18:54:51.176262 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:54:51.233431 systemd[1]: Started sshd@14-10.200.8.14:22-10.200.16.10:45398.service - OpenSSH per-connection server daemon (10.200.16.10:45398). 
Jan 23 18:54:51.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.14:22-10.200.16.10:45398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:51.237045 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:54:51.237125 kernel: audit: type=1130 audit(1769194491.232:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.14:22-10.200.16.10:45398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:51.814000 audit[6346]: USER_ACCT pid=6346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:51.820616 kernel: audit: type=1101 audit(1769194491.814:825): pid=6346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:51.819702 sshd-session[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:51.820949 sshd[6346]: Accepted publickey for core from 10.200.16.10 port 45398 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:51.826550 kernel: audit: type=1103 audit(1769194491.817:826): pid=6346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:51.817000 audit[6346]: CRED_ACQ pid=6346 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:51.830494 kernel: audit: type=1006 audit(1769194491.817:827): pid=6346 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 23 18:54:51.817000 audit[6346]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5895b460 a2=3 a3=0 items=0 ppid=1 pid=6346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:51.836497 kernel: audit: type=1300 audit(1769194491.817:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5895b460 a2=3 a3=0 items=0 ppid=1 pid=6346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:51.817000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:51.839511 kernel: audit: type=1327 audit(1769194491.817:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:51.843382 systemd-logind[2501]: New session 18 of user core. Jan 23 18:54:51.846660 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 18:54:51.850000 audit[6346]: USER_START pid=6346 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:51.859041 kernel: audit: type=1105 audit(1769194491.850:828): pid=6346 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:51.857000 audit[6350]: CRED_ACQ pid=6350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:51.868577 kernel: audit: type=1103 audit(1769194491.857:829): pid=6350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:52.190985 sshd[6350]: Connection closed by 10.200.16.10 port 45398 Jan 23 18:54:52.192607 sshd-session[6346]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:52.193000 audit[6346]: USER_END pid=6346 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:52.195756 systemd[1]: 
sshd@14-10.200.8.14:22-10.200.16.10:45398.service: Deactivated successfully. Jan 23 18:54:52.198124 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:54:52.201470 systemd-logind[2501]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:54:52.202557 systemd-logind[2501]: Removed session 18. Jan 23 18:54:52.193000 audit[6346]: CRED_DISP pid=6346 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:52.207065 kernel: audit: type=1106 audit(1769194492.193:830): pid=6346 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:52.207186 kernel: audit: type=1104 audit(1769194492.193:831): pid=6346 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:52.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.14:22-10.200.16.10:45398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:52.303541 systemd[1]: Started sshd@15-10.200.8.14:22-10.200.16.10:45412.service - OpenSSH per-connection server daemon (10.200.16.10:45412). Jan 23 18:54:52.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.14:22-10.200.16.10:45412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:54:52.862000 audit[6362]: USER_ACCT pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:52.863668 sshd[6362]: Accepted publickey for core from 10.200.16.10 port 45412 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:52.863000 audit[6362]: CRED_ACQ pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:52.863000 audit[6362]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc593c9650 a2=3 a3=0 items=0 ppid=1 pid=6362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:52.863000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:52.865296 sshd-session[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:52.870652 systemd-logind[2501]: New session 19 of user core. Jan 23 18:54:52.877791 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 18:54:52.880000 audit[6362]: USER_START pid=6362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:52.882000 audit[6366]: CRED_ACQ pid=6366 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:52.893908 kubelet[3990]: E0123 18:54:52.893869 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:54:53.296067 sshd[6366]: Connection closed by 10.200.16.10 port 45412 Jan 23 18:54:53.296667 sshd-session[6362]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:53.297000 audit[6362]: USER_END pid=6362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:53.297000 audit[6362]: CRED_DISP pid=6362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:53.300525 systemd[1]: sshd@15-10.200.8.14:22-10.200.16.10:45412.service: Deactivated successfully. Jan 23 18:54:53.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.14:22-10.200.16.10:45412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:53.302513 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:54:53.303301 systemd-logind[2501]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:54:53.305170 systemd-logind[2501]: Removed session 19. Jan 23 18:54:53.412261 systemd[1]: Started sshd@16-10.200.8.14:22-10.200.16.10:45426.service - OpenSSH per-connection server daemon (10.200.16.10:45426). Jan 23 18:54:53.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.14:22-10.200.16.10:45426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:54:53.898593 containerd[2555]: time="2026-01-23T18:54:53.898510953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:54:53.968000 audit[6376]: USER_ACCT pid=6376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:53.970200 sshd[6376]: Accepted publickey for core from 10.200.16.10 port 45426 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:53.971000 audit[6376]: CRED_ACQ pid=6376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:53.971000 audit[6376]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda7fd3310 a2=3 a3=0 items=0 ppid=1 pid=6376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:53.971000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:53.974106 sshd-session[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:53.980868 systemd-logind[2501]: New session 20 of user core. Jan 23 18:54:53.987120 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 18:54:53.991000 audit[6376]: USER_START pid=6376 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:53.993000 audit[6385]: CRED_ACQ pid=6385 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:54.175074 containerd[2555]: time="2026-01-23T18:54:54.174855617Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:54:54.177644 containerd[2555]: time="2026-01-23T18:54:54.177528669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:54:54.177644 containerd[2555]: time="2026-01-23T18:54:54.177618592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:54:54.179006 kubelet[3990]: E0123 18:54:54.177904 3990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:54:54.179797 kubelet[3990]: E0123 18:54:54.179404 3990 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:54:54.179797 kubelet[3990]: E0123 18:54:54.179617 3990 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82cb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-22mgp_calico-system(12936b13-6ad9-4c1b-a913-2f3039ac097a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:54:54.181397 kubelet[3990]: E0123 18:54:54.181341 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:54:54.474056 update_engine[2502]: I20260123 18:54:54.473929 2502 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:54:54.474056 update_engine[2502]: I20260123 18:54:54.474013 2502 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:54:54.474413 update_engine[2502]: I20260123 18:54:54.474370 2502 libcurl_http_fetcher.cc:449] 
Setting up timeout source: 1 seconds. Jan 23 18:54:54.561975 update_engine[2502]: E20260123 18:54:54.561842 2502 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:54:54.561975 update_engine[2502]: I20260123 18:54:54.561944 2502 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 18:54:54.979000 audit[6397]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=6397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:54:54.979000 audit[6397]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffde7d417c0 a2=0 a3=7ffde7d417ac items=0 ppid=4095 pid=6397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:54.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:54:54.991000 audit[6397]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=6397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:54:54.991000 audit[6397]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffde7d417c0 a2=0 a3=0 items=0 ppid=4095 pid=6397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:54.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:54:55.010000 audit[6399]: NETFILTER_CFG table=filter:140 family=2 entries=38 op=nft_register_rule pid=6399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:54:55.010000 audit[6399]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 
a0=3 a1=7fff4a3ac040 a2=0 a3=7fff4a3ac02c items=0 ppid=4095 pid=6399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:55.010000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:54:55.014000 audit[6399]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=6399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:54:55.014000 audit[6399]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff4a3ac040 a2=0 a3=0 items=0 ppid=4095 pid=6399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:55.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:54:55.106237 sshd[6385]: Connection closed by 10.200.16.10 port 45426 Jan 23 18:54:55.107210 sshd-session[6376]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:55.108000 audit[6376]: USER_END pid=6376 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:55.108000 audit[6376]: CRED_DISP pid=6376 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:55.112033 systemd[1]: 
sshd@16-10.200.8.14:22-10.200.16.10:45426.service: Deactivated successfully. Jan 23 18:54:55.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.14:22-10.200.16.10:45426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:55.116167 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 18:54:55.117346 systemd-logind[2501]: Session 20 logged out. Waiting for processes to exit. Jan 23 18:54:55.119753 systemd-logind[2501]: Removed session 20. Jan 23 18:54:55.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.14:22-10.200.16.10:45436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:55.222164 systemd[1]: Started sshd@17-10.200.8.14:22-10.200.16.10:45436.service - OpenSSH per-connection server daemon (10.200.16.10:45436). 
Jan 23 18:54:55.782000 audit[6404]: USER_ACCT pid=6404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:55.783513 sshd[6404]: Accepted publickey for core from 10.200.16.10 port 45436 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:55.783000 audit[6404]: CRED_ACQ pid=6404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:55.784000 audit[6404]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd447d77b0 a2=3 a3=0 items=0 ppid=1 pid=6404 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:55.784000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:55.785534 sshd-session[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:55.792267 systemd-logind[2501]: New session 21 of user core. Jan 23 18:54:55.796667 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 18:54:55.800000 audit[6404]: USER_START pid=6404 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:55.802000 audit[6408]: CRED_ACQ pid=6408 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:56.343861 sshd[6408]: Connection closed by 10.200.16.10 port 45436 Jan 23 18:54:56.345171 sshd-session[6404]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:56.352787 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 23 18:54:56.352889 kernel: audit: type=1106 audit(1769194496.347:861): pid=6404 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:56.347000 audit[6404]: USER_END pid=6404 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:56.353846 systemd[1]: sshd@17-10.200.8.14:22-10.200.16.10:45436.service: Deactivated successfully. Jan 23 18:54:56.356829 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 18:54:56.358596 systemd-logind[2501]: Session 21 logged out. Waiting for processes to exit. 
Jan 23 18:54:56.360103 kernel: audit: type=1104 audit(1769194496.347:862): pid=6404 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:56.347000 audit[6404]: CRED_DISP pid=6404 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:56.364463 systemd-logind[2501]: Removed session 21. Jan 23 18:54:56.369635 kernel: audit: type=1131 audit(1769194496.349:863): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.14:22-10.200.16.10:45436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:56.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.14:22-10.200.16.10:45436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:56.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.14:22-10.200.16.10:45440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:56.457493 systemd[1]: Started sshd@18-10.200.8.14:22-10.200.16.10:45440.service - OpenSSH per-connection server daemon (10.200.16.10:45440). Jan 23 18:54:56.463556 kernel: audit: type=1130 audit(1769194496.457:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.14:22-10.200.16.10:45440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:54:57.019000 audit[6418]: USER_ACCT pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:57.024297 sshd-session[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:54:57.022000 audit[6418]: CRED_ACQ pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:57.025041 sshd[6418]: Accepted publickey for core from 10.200.16.10 port 45440 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:54:57.026188 kernel: audit: type=1101 audit(1769194497.019:865): pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:57.026269 kernel: audit: type=1103 audit(1769194497.022:866): pid=6418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:57.035148 kernel: audit: type=1006 audit(1769194497.022:867): pid=6418 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 23 18:54:57.043300 kernel: audit: type=1300 audit(1769194497.022:867): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3a3404a0 a2=3 a3=0 items=0 ppid=1 pid=6418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:57.022000 audit[6418]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3a3404a0 a2=3 a3=0 items=0 ppid=1 pid=6418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:54:57.043543 systemd-logind[2501]: New session 22 of user core. Jan 23 18:54:57.022000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:57.046504 kernel: audit: type=1327 audit(1769194497.022:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:54:57.051668 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 18:54:57.053000 audit[6418]: USER_START pid=6418 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:57.053000 audit[6422]: CRED_ACQ pid=6422 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:57.062495 kernel: audit: type=1105 audit(1769194497.053:868): pid=6418 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:57.414640 sshd[6422]: Connection closed by 10.200.16.10 port 45440 Jan 23 18:54:57.415441 
sshd-session[6418]: pam_unix(sshd:session): session closed for user core Jan 23 18:54:57.416000 audit[6418]: USER_END pid=6418 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:57.416000 audit[6418]: CRED_DISP pid=6418 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:54:57.419527 systemd-logind[2501]: Session 22 logged out. Waiting for processes to exit. Jan 23 18:54:57.420750 systemd[1]: sshd@18-10.200.8.14:22-10.200.16.10:45440.service: Deactivated successfully. Jan 23 18:54:57.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.14:22-10.200.16.10:45440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:54:57.423735 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 18:54:57.426937 systemd-logind[2501]: Removed session 22. 
Jan 23 18:55:00.009000 audit[6434]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=6434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:55:00.009000 audit[6434]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1f62d6d0 a2=0 a3=7ffc1f62d6bc items=0 ppid=4095 pid=6434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:55:00.009000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:55:00.015000 audit[6434]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=6434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:55:00.015000 audit[6434]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc1f62d6d0 a2=0 a3=7ffc1f62d6bc items=0 ppid=4095 pid=6434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:55:00.015000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:55:01.897624 kubelet[3990]: E0123 18:55:01.897574 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:55:01.899844 kubelet[3990]: E0123 18:55:01.899811 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:55:02.533627 systemd[1]: Started sshd@19-10.200.8.14:22-10.200.16.10:42442.service - OpenSSH per-connection server daemon (10.200.16.10:42442). Jan 23 18:55:02.540560 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 23 18:55:02.540625 kernel: audit: type=1130 audit(1769194502.532:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.14:22-10.200.16.10:42442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:55:02.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.14:22-10.200.16.10:42442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:55:02.893933 kubelet[3990]: E0123 18:55:02.893365 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:55:03.098000 audit[6450]: USER_ACCT pid=6450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.109509 kernel: audit: type=1101 audit(1769194503.098:876): pid=6450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.109696 sshd[6450]: Accepted publickey for core from 10.200.16.10 port 42442 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:55:03.111156 sshd-session[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:55:03.108000 audit[6450]: CRED_ACQ pid=6450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.122580 kernel: audit: type=1103 audit(1769194503.108:877): pid=6450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.127501 kernel: audit: type=1006 audit(1769194503.108:878): pid=6450 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 23 18:55:03.127496 systemd-logind[2501]: New session 23 of user core. Jan 23 18:55:03.108000 audit[6450]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff912bc170 a2=3 a3=0 items=0 ppid=1 pid=6450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:55:03.135496 kernel: audit: type=1300 audit(1769194503.108:878): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff912bc170 a2=3 a3=0 items=0 ppid=1 pid=6450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:55:03.108000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:55:03.138500 kernel: audit: type=1327 audit(1769194503.108:878): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:55:03.138661 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 18:55:03.148806 kernel: audit: type=1105 audit(1769194503.140:879): pid=6450 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.140000 audit[6450]: USER_START pid=6450 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.147000 audit[6454]: CRED_ACQ pid=6454 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.157498 kernel: audit: type=1103 audit(1769194503.147:880): pid=6454 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.470145 sshd[6454]: Connection closed by 10.200.16.10 port 42442 Jan 23 18:55:03.470802 sshd-session[6450]: pam_unix(sshd:session): session closed for user core Jan 23 18:55:03.470000 audit[6450]: USER_END pid=6450 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.479087 systemd[1]: 
sshd@19-10.200.8.14:22-10.200.16.10:42442.service: Deactivated successfully. Jan 23 18:55:03.482511 kernel: audit: type=1106 audit(1769194503.470:881): pid=6450 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.482697 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 18:55:03.484624 systemd-logind[2501]: Session 23 logged out. Waiting for processes to exit. Jan 23 18:55:03.471000 audit[6450]: CRED_DISP pid=6450 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:03.489344 systemd-logind[2501]: Removed session 23. Jan 23 18:55:03.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.14:22-10.200.16.10:42442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:55:03.494643 kernel: audit: type=1104 audit(1769194503.471:882): pid=6450 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:04.474331 update_engine[2502]: I20260123 18:55:04.474269 2502 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:55:04.474716 update_engine[2502]: I20260123 18:55:04.474356 2502 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:55:04.474752 update_engine[2502]: I20260123 18:55:04.474735 2502 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 18:55:04.500056 update_engine[2502]: E20260123 18:55:04.500013 2502 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:55:04.500176 update_engine[2502]: I20260123 18:55:04.500102 2502 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 18:55:04.500176 update_engine[2502]: I20260123 18:55:04.500113 2502 omaha_request_action.cc:617] Omaha request response: Jan 23 18:55:04.500221 update_engine[2502]: E20260123 18:55:04.500188 2502 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 23 18:55:04.500221 update_engine[2502]: I20260123 18:55:04.500204 2502 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 23 18:55:04.500221 update_engine[2502]: I20260123 18:55:04.500209 2502 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 18:55:04.500221 update_engine[2502]: I20260123 18:55:04.500214 2502 update_attempter.cc:306] Processing Done. Jan 23 18:55:04.500305 update_engine[2502]: E20260123 18:55:04.500229 2502 update_attempter.cc:619] Update failed. Jan 23 18:55:04.500305 update_engine[2502]: I20260123 18:55:04.500234 2502 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 18:55:04.500305 update_engine[2502]: I20260123 18:55:04.500238 2502 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 18:55:04.500305 update_engine[2502]: I20260123 18:55:04.500244 2502 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 23 18:55:04.500391 update_engine[2502]: I20260123 18:55:04.500319 2502 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 18:55:04.500391 update_engine[2502]: I20260123 18:55:04.500342 2502 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 18:55:04.500391 update_engine[2502]: I20260123 18:55:04.500347 2502 omaha_request_action.cc:272] Request: Jan 23 18:55:04.500391 update_engine[2502]: Jan 23 18:55:04.500391 update_engine[2502]: Jan 23 18:55:04.500391 update_engine[2502]: Jan 23 18:55:04.500391 update_engine[2502]: Jan 23 18:55:04.500391 update_engine[2502]: Jan 23 18:55:04.500391 update_engine[2502]: Jan 23 18:55:04.500391 update_engine[2502]: I20260123 18:55:04.500353 2502 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:55:04.500391 update_engine[2502]: I20260123 18:55:04.500372 2502 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:55:04.500874 locksmithd[2594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 18:55:04.501086 update_engine[2502]: I20260123 18:55:04.500880 2502 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 18:55:04.547316 update_engine[2502]: E20260123 18:55:04.547277 2502 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:55:04.547393 update_engine[2502]: I20260123 18:55:04.547334 2502 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 18:55:04.547393 update_engine[2502]: I20260123 18:55:04.547341 2502 omaha_request_action.cc:617] Omaha request response: Jan 23 18:55:04.547393 update_engine[2502]: I20260123 18:55:04.547346 2502 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 18:55:04.547393 update_engine[2502]: I20260123 18:55:04.547351 2502 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 18:55:04.547393 update_engine[2502]: I20260123 18:55:04.547355 2502 update_attempter.cc:306] Processing Done. Jan 23 18:55:04.547393 update_engine[2502]: I20260123 18:55:04.547360 2502 update_attempter.cc:310] Error event sent. 
Jan 23 18:55:04.547393 update_engine[2502]: I20260123 18:55:04.547369 2502 update_check_scheduler.cc:74] Next update check in 45m13s Jan 23 18:55:04.547722 locksmithd[2594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 18:55:04.892244 kubelet[3990]: E0123 18:55:04.892191 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:55:04.892244 kubelet[3990]: E0123 18:55:04.892191 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:55:05.894763 kubelet[3990]: E0123 18:55:05.894717 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:55:08.586769 systemd[1]: Started sshd@20-10.200.8.14:22-10.200.16.10:42446.service - OpenSSH per-connection server daemon (10.200.16.10:42446). Jan 23 18:55:08.592509 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:55:08.592587 kernel: audit: type=1130 audit(1769194508.585:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.14:22-10.200.16.10:42446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:55:08.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.14:22-10.200.16.10:42446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:55:09.169500 sshd[6466]: Accepted publickey for core from 10.200.16.10 port 42446 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:55:09.167000 audit[6466]: USER_ACCT pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.172974 sshd-session[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:55:09.178548 kernel: audit: type=1101 audit(1769194509.167:885): pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.170000 audit[6466]: CRED_ACQ pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.187469 systemd-logind[2501]: New session 24 of user core. Jan 23 18:55:09.192501 kernel: audit: type=1103 audit(1769194509.170:886): pid=6466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.197492 kernel: audit: type=1006 audit(1769194509.170:887): pid=6466 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 23 18:55:09.202881 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 23 18:55:09.170000 audit[6466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda4679b30 a2=3 a3=0 items=0 ppid=1 pid=6466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:55:09.212499 kernel: audit: type=1300 audit(1769194509.170:887): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda4679b30 a2=3 a3=0 items=0 ppid=1 pid=6466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:55:09.170000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:55:09.206000 audit[6466]: USER_START pid=6466 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.226251 kernel: audit: type=1327 audit(1769194509.170:887): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:55:09.226296 kernel: audit: type=1105 audit(1769194509.206:888): pid=6466 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.209000 audit[6470]: CRED_ACQ pid=6470 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 
18:55:09.241501 kernel: audit: type=1103 audit(1769194509.209:889): pid=6470 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.532418 sshd[6470]: Connection closed by 10.200.16.10 port 42446 Jan 23 18:55:09.533644 sshd-session[6466]: pam_unix(sshd:session): session closed for user core Jan 23 18:55:09.533000 audit[6466]: USER_END pid=6466 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.537994 systemd[1]: sshd@20-10.200.8.14:22-10.200.16.10:42446.service: Deactivated successfully. Jan 23 18:55:09.539930 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 18:55:09.542627 systemd-logind[2501]: Session 24 logged out. Waiting for processes to exit. Jan 23 18:55:09.543529 systemd-logind[2501]: Removed session 24. 
Jan 23 18:55:09.533000 audit[6466]: CRED_DISP pid=6466 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.547999 kernel: audit: type=1106 audit(1769194509.533:890): pid=6466 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.548113 kernel: audit: type=1104 audit(1769194509.533:891): pid=6466 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:09.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.14:22-10.200.16.10:42446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:55:14.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.14:22-10.200.16.10:56494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:55:14.649374 systemd[1]: Started sshd@21-10.200.8.14:22-10.200.16.10:56494.service - OpenSSH per-connection server daemon (10.200.16.10:56494). 
Jan 23 18:55:14.650906 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:55:14.650941 kernel: audit: type=1130 audit(1769194514.649:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.14:22-10.200.16.10:56494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:55:15.213840 sshd[6508]: Accepted publickey for core from 10.200.16.10 port 56494 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:55:15.213000 audit[6508]: USER_ACCT pid=6508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.221501 kernel: audit: type=1101 audit(1769194515.213:894): pid=6508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.222660 sshd-session[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:55:15.230920 systemd-logind[2501]: New session 25 of user core. Jan 23 18:55:15.231991 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 18:55:15.221000 audit[6508]: CRED_ACQ pid=6508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.238502 kernel: audit: type=1103 audit(1769194515.221:895): pid=6508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.247898 kernel: audit: type=1006 audit(1769194515.221:896): pid=6508 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 23 18:55:15.221000 audit[6508]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd065ef00 a2=3 a3=0 items=0 ppid=1 pid=6508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:55:15.254505 kernel: audit: type=1300 audit(1769194515.221:896): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd065ef00 a2=3 a3=0 items=0 ppid=1 pid=6508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:55:15.221000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:55:15.257491 kernel: audit: type=1327 audit(1769194515.221:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:55:15.242000 audit[6508]: USER_START pid=6508 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.266964 kernel: audit: type=1105 audit(1769194515.242:897): pid=6508 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.247000 audit[6512]: CRED_ACQ pid=6512 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.274497 kernel: audit: type=1103 audit(1769194515.247:898): pid=6512 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.610902 sshd[6512]: Connection closed by 10.200.16.10 port 56494 Jan 23 18:55:15.611665 sshd-session[6508]: pam_unix(sshd:session): session closed for user core Jan 23 18:55:15.612000 audit[6508]: USER_END pid=6508 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.617951 systemd[1]: sshd@21-10.200.8.14:22-10.200.16.10:56494.service: Deactivated successfully. 
Jan 23 18:55:15.619531 kernel: audit: type=1106 audit(1769194515.612:899): pid=6508 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.620541 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 18:55:15.622597 systemd-logind[2501]: Session 25 logged out. Waiting for processes to exit. Jan 23 18:55:15.612000 audit[6508]: CRED_DISP pid=6508 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.628558 kernel: audit: type=1104 audit(1769194515.612:900): pid=6508 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:15.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.14:22-10.200.16.10:56494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:55:15.628962 systemd-logind[2501]: Removed session 25. 
Jan 23 18:55:15.892229 kubelet[3990]: E0123 18:55:15.891953 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44" Jan 23 18:55:16.894492 kubelet[3990]: E0123 18:55:16.894435 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-f4rb7" podUID="c7f09343-3d0b-4264-987b-68763f2830ab" Jan 23 18:55:16.896706 kubelet[3990]: E0123 18:55:16.895284 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a" Jan 23 18:55:16.897500 kubelet[3990]: E0123 18:55:16.897212 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6" Jan 23 18:55:17.895787 kubelet[3990]: E0123 18:55:17.895729 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8686dc9b89-kzk6x" podUID="c2f1acaa-9237-4a56-b34a-eb28ae8b7529" Jan 23 18:55:19.895531 kubelet[3990]: E0123 18:55:19.895129 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd" Jan 23 18:55:20.732779 systemd[1]: Started sshd@22-10.200.8.14:22-10.200.16.10:60668.service - OpenSSH per-connection server daemon (10.200.16.10:60668). Jan 23 18:55:20.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.14:22-10.200.16.10:60668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:55:20.734298 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:55:20.734358 kernel: audit: type=1130 audit(1769194520.731:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.14:22-10.200.16.10:60668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:55:21.304000 audit[6524]: USER_ACCT pid=6524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:21.311069 sshd[6524]: Accepted publickey for core from 10.200.16.10 port 60668 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg Jan 23 18:55:21.309000 audit[6524]: CRED_ACQ pid=6524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:21.312584 sshd-session[6524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:55:21.318632 kernel: audit: type=1101 audit(1769194521.304:903): pid=6524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:21.318771 kernel: audit: type=1103 audit(1769194521.309:904): pid=6524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 18:55:21.322774 kernel: audit: type=1006 audit(1769194521.309:905): pid=6524 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 23 18:55:21.326814 kernel: audit: type=1300 audit(1769194521.309:905): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe271e980 a2=3 a3=0 items=0 ppid=1 pid=6524 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:55:21.309000 audit[6524]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe271e980 a2=3 a3=0 items=0 ppid=1 pid=6524 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:55:21.327289 systemd-logind[2501]: New session 26 of user core.
Jan 23 18:55:21.309000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:55:21.330806 kernel: audit: type=1327 audit(1769194521.309:905): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:55:21.334823 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 18:55:21.340000 audit[6524]: USER_START pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:21.348537 kernel: audit: type=1105 audit(1769194521.340:906): pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:21.345000 audit[6528]: CRED_ACQ pid=6528 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:21.356556 kernel: audit: type=1103 audit(1769194521.345:907): pid=6528 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:21.742351 sshd[6528]: Connection closed by 10.200.16.10 port 60668
Jan 23 18:55:21.744373 sshd-session[6524]: pam_unix(sshd:session): session closed for user core
Jan 23 18:55:21.744000 audit[6524]: USER_END pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:21.754485 systemd[1]: sshd@22-10.200.8.14:22-10.200.16.10:60668.service: Deactivated successfully.
Jan 23 18:55:21.756712 kernel: audit: type=1106 audit(1769194521.744:908): pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:21.744000 audit[6524]: CRED_DISP pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:21.758994 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 18:55:21.761215 systemd-logind[2501]: Session 26 logged out. Waiting for processes to exit.
Jan 23 18:55:21.765781 kernel: audit: type=1104 audit(1769194521.744:909): pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:21.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.14:22-10.200.16.10:60668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:55:21.767806 systemd-logind[2501]: Removed session 26.
Jan 23 18:55:26.877862 systemd[1]: Started sshd@23-10.200.8.14:22-10.200.16.10:60678.service - OpenSSH per-connection server daemon (10.200.16.10:60678).
Jan 23 18:55:26.887914 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 18:55:26.888027 kernel: audit: type=1130 audit(1769194526.876:911): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.14:22-10.200.16.10:60678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:55:26.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.14:22-10.200.16.10:60678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:55:27.441000 audit[6539]: USER_ACCT pid=6539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.442290 sshd[6539]: Accepted publickey for core from 10.200.16.10 port 60678 ssh2: RSA SHA256:f/XaV1Zp/roiywP0gyAVgKeF5JpVLQrZkQkJHn/0jSg
Jan 23 18:55:27.446624 sshd-session[6539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:55:27.445000 audit[6539]: CRED_ACQ pid=6539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.450170 kernel: audit: type=1101 audit(1769194527.441:912): pid=6539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.450231 kernel: audit: type=1103 audit(1769194527.445:913): pid=6539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.453684 kernel: audit: type=1006 audit(1769194527.445:914): pid=6539 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jan 23 18:55:27.445000 audit[6539]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd3807f60 a2=3 a3=0 items=0 ppid=1 pid=6539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:55:27.460571 kernel: audit: type=1300 audit(1769194527.445:914): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd3807f60 a2=3 a3=0 items=0 ppid=1 pid=6539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:55:27.460643 kernel: audit: type=1327 audit(1769194527.445:914): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:55:27.445000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:55:27.460746 systemd-logind[2501]: New session 27 of user core.
Jan 23 18:55:27.467678 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 18:55:27.469000 audit[6539]: USER_START pid=6539 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.476512 kernel: audit: type=1105 audit(1769194527.469:915): pid=6539 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.473000 audit[6543]: CRED_ACQ pid=6543 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.482552 kernel: audit: type=1103 audit(1769194527.473:916): pid=6543 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.860119 sshd[6543]: Connection closed by 10.200.16.10 port 60678
Jan 23 18:55:27.860418 sshd-session[6539]: pam_unix(sshd:session): session closed for user core
Jan 23 18:55:27.862000 audit[6539]: USER_END pid=6539 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.871530 kernel: audit: type=1106 audit(1769194527.862:917): pid=6539 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.862000 audit[6539]: CRED_DISP pid=6539 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.873516 systemd[1]: sshd@23-10.200.8.14:22-10.200.16.10:60678.service: Deactivated successfully.
Jan 23 18:55:27.878846 kernel: audit: type=1104 audit(1769194527.862:918): pid=6539 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 18:55:27.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.14:22-10.200.16.10:60678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:55:27.879736 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 18:55:27.881693 systemd-logind[2501]: Session 27 logged out. Waiting for processes to exit.
Jan 23 18:55:27.882709 systemd-logind[2501]: Removed session 27.
Jan 23 18:55:27.894502 kubelet[3990]: E0123 18:55:27.894310 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6658c89489-trg8d" podUID="0fe1ccdb-f11d-478d-b8c5-50e7a678ae44"
Jan 23 18:55:27.895938 kubelet[3990]: E0123 18:55:27.895883 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-774b9649d4-hsh9h" podUID="7886516f-3341-4184-8abc-3d16d954f0c6"
Jan 23 18:55:30.893576 kubelet[3990]: E0123 18:55:30.893065 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22mgp" podUID="12936b13-6ad9-4c1b-a913-2f3039ac097a"
Jan 23 18:55:30.894293 kubelet[3990]: E0123 18:55:30.894196 3990 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-slbmv" podUID="ad1b7350-c4c8-43d5-adb7-51075adcd4fd"
Jan 23 18:55:30.918171 kubelet[3990]: E0123 18:55:30.917983 3990 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: EOF" event="&Event{ObjectMeta:{goldmane-666569f655-22mgp.188d70eee93660a5 calico-system 1588 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-22mgp,UID:12936b13-6ad9-4c1b-a913-2f3039ac097a,APIVersion:v1,ResourceVersion:785,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4547.1.0-a-90f1f3b2aa,},FirstTimestamp:2026-01-23 18:53:20 +0000 UTC,LastTimestamp:2026-01-23 18:55:30.893017326 +0000 UTC m=+177.096652720,Count:9,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.1.0-a-90f1f3b2aa,}"